Hi,
I created this thread to show off some random computer hardware - retro and new, desktop and server.

It will likely contain my opinions on the hardware, photos, screenshots, tests, problems I've encountered, and what I liked and didn't like.

Note1: Most of the cases will be in racks, and they aren't for "show"; I only care about good/decent temps for 24x7x365 operation at 100% utilization.
Note2: I mainly use them to play around, run BOINC projects and 3D graphics software (3ds Max, Vue, Poser etc.), and sometimes games.
Note3: 1U, 2U, 4U, 6U describe the size of the rack case. (Use DuckDuckGo/Google for examples.)

To start, let me tell you about the current hardware I'm playing with, from newest to oldest:

#1

4U Update@Feb 2020
Mobo = MSI x470 Pro Gaming Carbon
CPU = Ryzen 1700x @ 3.9GHz (water loop #1)
RAM = 32GB DDR4 3333MHz
GPU = Radeon VII @ PCI-E 3.0 x8
Drives =
2x NVMe XPG 8200 PCI-E 3.0 x8 in M.2 slots (openSUSE)
3x Intel SSDSC2BA400G3 in RAID0 (2 on SATA, 1 on SAS)
PSU = Platimax 850W

#2

4U
Mobo = MSI x370 Pro Gaming Carbon
CPU = Ryzen 1700x @ 3.7GHz (water loop #1)
RAM = 16GB DDR4 3200MHz
GPU = Evga Nvidia 980Ti Hydro
Drives =
1x NVME Samsung 960 @ M.2 PCI-E 3.0 x4
1x Mushkin MKNSSDEC240GB
1x Kingston SV300S37A240G
2x Seagate ST4000DM000-1F2168 (RAID1)
1x WDC WD20EURS-73S48Y0
PSU = Platimax 850W

#3 -- Decommissioned.
6U
Mobo = Gigabyte x79 UD3
CPU = Intel 4930k @ 4.2GHz (water loop #2) | *swap* 4820k @ 5GHz
RAM = 32GB DDR3 1600MHz
GPU = Fury-X @ PCI-E 3.0 x16
Drives =
2x Patriot SSD 60GB
PSU = OCZ 700W 80+ Bronze

#4

4U (IBM xSeries 225)
Mobo = IBM E7505 Master-LS2 MSI-9121
CPU = 2x Intel Xeon @ 3.2GHz Gallatin
RAM = 8GB DDR1 266MHz
GPU = ATi Radeon x1950 Pro 512MB @ AGP 8x | Palit NV 7600GS 256MB | ATi FireGL 9500 128MB || (Swap)
Drives =
1x SCSI Ultra320 Seagate 300GB 15k
5x SCSI Ultra320 MIX 146GB 15k (RAID 1)
1x Generic IBM CD-ROM
1x Generic IBM Floppy
PSU = IBM 425W (upgraded from 300W)


I also have plenty of hardware just lying about which could be used to build more PCs, but it has no use for me - and my rack is full anyway.

In the following posts I will describe and post the latest shots of what I'm currently playing with.

My own SSD Ranking
1. Intel SSDSC2BA400G3 (Durability, Performance)
I think this model outclasses many SSDs simply because it's so durable - still going strong.
- If you plan on doing a lot of r/w, datacenter or home, this SSD is great; it will last for years without slowing down.
1583351263560.png


2. Hynix Gold S31 1TB (Cheap, Best Sustained Performance)
This SSD isn't very well known, but we have 2 mounted in RAID1 and they've been going strong. Their sustained r/w performance is great. If you are looking for performance with decent durability, it's a much better choice than the Samsung 860 Pro.

3. Crucial MX500 (Cheap, Good Sustained Performance)
We have been using these SSDs since they are cheap and reliable, though their durability is a bit lacking - they seem to slow down over time even without heavy r/w.

4. Samsung 860 (Evo/Pro) (Expensive, Good Endurance, Not Great Sustained Performance)
What to say... expensive, with a great reputation, but it's not that great anymore. It has great burst performance up to a couple of GB but then slows down, and that's why it's ranked below the Crucial and the Hynix. If you aren't doing a lot of writes, this SSD is just fine; but if you are, you will quickly fill its fast cache and write at much lower speeds.

5. Kingston SV300S37A240G (Old, Surprisingly Good Endurance)
This SSD has surprisingly good endurance compared to other SSDs from that era. I have done over 40TB of writes, and the drive still reports 99% of life remaining according to the SMART log (see the smartctl sketch after this list).

6. Mushkin Reactor (Disappointing, Good Performance, Bad Endurance, Buggy SMART Logging)
Well, I always liked Mushkin as a company, but their SSDs are not on par with other makers'. The performance is there, but the endurance is not, and it is losing durability just over time like the Crucial MX500s (unless it's a bug in SMART).
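
For anyone who wants to check write totals and wear on their own drives, here's a minimal smartmontools sketch - the attribute names differ per vendor (Total_LBAs_Written, Wear_Leveling_Count, Media_Wearout_Indicator, etc.), so treat the grep list as an example rather than a universal recipe:

Code:
# dump the SMART attribute table for the drive
sudo smartctl -A /dev/sda
# typical attributes to look at on SSDs (names vary by vendor)
sudo smartctl -A /dev/sda | grep -Ei 'lbas_written|wear|wearout|percent'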
 
Recent update: upgrading #4 with new CPUs, an audio card and 5x 146GB SCSI drives (got them for free from work).
#Note: Once I install a system on this box I'll benchmark the changes, and obviously I'll enjoy the quality of the sound :^)


Replacing the 2x Xeon 2.6GHz Prestonia with 3.2GHz Gallatin CPU's
NkwsFWG.jpg

2jKZ4wT.jpg

Left: Prestonia, Right: Gallatin

As you can see, here is a comparison of how the look changed (they are both compatible and use the same socket). I personally love how the Prestonia looked.
Note: Some Prestonia models like the SL6VM already looked like the Gallatin; they weren't as flashy as this one.

I bought a pair of 3.2GHz Gallatins for $14 and the seller sent me 4, since he wasn't sure they worked. One had a missing pin (in the picture below with pins up), and one didn't want to POST.
2 were functional, yay!

9t4aDqA.jpg

zPVnBZJ.jpg



Audio card
Sound Blaster Audigy 2 ZS SB0350 PCI
lUSc8TE.jpg

kEXh6iW.jpg


The 5x SCSI Ultra320 drives came from DELL servers with Dell tray caddies, so I had to get new ones from eBay. (Sorry for the lack of pictures of the actual drives - they are already formatting - but here are the trays I removed.)
Of the IBM trays that came in, one was broken.

Broken IBM tray
vMGUyLY.jpg


Here are the Dell trays I removed (I've got 4 of them; if anyone wants them let me know - I'll send them even for free).
VHZWVer.jpg


Here's the SCSI controller BIOS where we can see all of the drives.
Ldzpqo0.jpg


Formatting them is a pain: on average it takes 1h, and the 300GB one took 1h 48m. I cannot create the virtual disk RAID with the IBM ISO until all drives are formatted in the BIOS :<
8XVcPv0.jpg


While I wait for the drives to format, I'm drinking a new soda I bought from a Polish shop near me. *It's very good.
bej3F53.jpg



Here's how it looks with all drive bays filled (sweet, and heavy as fffu)
gPuhjdC.jpg


I'll post more info once I get the formatting done and finish deciding which system I want to install (Windows 2000, XP SP3, or 2003 Server).
 

Done with the formatting (around 5h for all drives); the installation went smoothly.

At first I installed 2003 Server. It worked fine, but there were problems with DX8 - not to mention old browser problems with IE5: you cannot make any SSL connection. Thankfully 2003 had Ethernet drivers. You cannot run Windows Update until you manually download the service packs on another computer and then install IE6 and IE7; only then can you start the actual update process.
It took about 5 reboots for updates after that, only to find that I needed to install .NET 2.0 x86 (which I couldn't find). In the end I installed the .NET 2.0 service packs 1 and 2, which thankfully worked.
After installing the Catalyst drivers for the Radeon X1950, I proceeded to install the old 3DMarks. To my dismay, 3DMark01 wouldn't run, as DX8 could not be used for some odd reason.
Next I tried to make the system more like what I typically use: Daemon Tools 1.4 (an older version that would work with it). Well, RIP... the system didn't want to come back up.
I tried booting into safe mode and even debugging mode, and was able to remove Daemon Tools, but I couldn't get the system to boot anymore (I think problems with SCSI).

Right now I have WinXP with all updates; all 3DMarks work just fine, with very similar issues otherwise - and one big one... no drivers for the network interface.

I'll attach some benchmarks etc.

On this specific workstation, my to-do list would be acquiring:

another set of smaller speakers
some cooling for the drives (boy do they get hot! while running a small bench on the drives they reached 91°C within 30 min) - I'll likely have to cut holes in the drive cage and mount a fan in front
better cooling on the Xeons; a single 120mm fan for those 2 is not enough ;3

sS9uRM3.jpg


I've turned off the 2 fans that were previously supposed to cool those two down - they were too loud. (The heatsinks were not made for this board; they were ones I found lying around that happened to fit.)
While taking a look at it - and it's almost time to finish up this retro PC - I've noticed that the fan on the X1950 sits a bit too high and I won't be able to close the case. I'll likely look for an AGP riser card and mount it vertically instead. That would also help the airflow (as it wouldn't be blowing hot air onto the mobo).
 
May be offtopic, but I spoke with an Oracle rep from their server/cloud infrastructure (at the DataStax Accelerate conference). As we spoke I found out that they switched to AMD Epyc for their whole cloud infrastructure. It also seems HP is considering switching completely to AMD by 2020.

Adobe uses Puppet and Cassandra 3.
Their presentation was still hand drawn and the squares were not straight, rofl.
They use Amazon Linux 2017.

At Amazon they are serial SSD/NVMe killers, as they churn out 6-8TB of data writes every day
and only run TRIM at the end of the day.
(They complained that after a couple of days they were already impacted by 10x slower IO.)
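
For context, the "trim at the end of the day" bit would look something like a nightly fstrim job on Linux - a minimal sketch, not their actual setup:

Code:
# root crontab entry: trim all mounted filesystems that support discard, once a day at 23:00
0 23 * * * /usr/sbin/fstrim -av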


//Just a note to those interested.
 
Some random speculation on the Zen2 Epyc CPU | just speculation.

14nm Epyc 7601 (32 cores 4 chiplets)
Die = 852 mm^2
Transistors = 4,800 million (?incorrect? - or should it be x4 = 19,200 million)
Density = 22,535 T/mm^2 | or 90,140 T/mm^2

45W per chiplet @ 3.2GHz
=================================================

12nm TR 2990WX (32 cores, 4 chiplets) ~ assuming this data is correct.
Die = 852 mm^2
Transistor = 19,200 million
Density = 90,140 T/mm^2

62W per chiplet @ 4.2GHz -> +37%(W) +31% (Clock)
=================================================

7nm+14nm Epyc X (64 cores, 8 chiplets) ~ transistor count and density are calculated without the IO die
Die = 1040mm^2 (75mm^2 x8 = 600mm^2 + 440mm^2 IO) | (@ 600mm^2 -42%)
Transistor = ? (@ 39,968 T/mm^2 @ 600mm^2) = ~23,980 million (+24% over 12nm)
TransistorErrorRate = ~25,599 million (+6.7% Error Rate) | ~21,000 million ( -12% Error Rate)
Density = ? (39,968 T/mm^2)
DensityErrorRate = ~42,666 T/mm^2 (if chiplet = 3.2B) | expected @35,000T/mm^2 (+21% Error Rate)

Total Wattage = should be around ~300W | 33W per component
Max approx = 350W | 38W per component
Low approx = 240W | 26W per component

I am unsure about the IO die, but it may take more power than a CPU chiplet (and likely does). If that's the case we may be looking at +50W, making the max approx 400W.

=================================================

Based on what we see on GPUs (taken within error):

14nm
Die Size = 232 mm^2
Transistors = 5,700 million
Density = 24,568 T/mm^2


Die Size = 495 mm^2 (+46%)
Transistors = 12,500 million (+45.6%)
Density = 25,252 T/mm^2


7nm
Die Size = 331 mm^2 (-33%)
Transistors = 13,230 million (+5.8%)
Density = 39,969 T/mm^2 (+58.2%)
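
For reference, the density figures above are just transistors divided by die area, which come out in thousands of transistors per mm^2. A quick awk sketch (my own sanity check, nothing official) reproducing the GPU numbers:

Code:
# density [thousand transistors / mm^2] = transistors(millions) / die_area(mm^2) * 1000
awk 'BEGIN {
  printf "14nm 232mm^2: %d kT/mm^2\n",  5700 / 232 * 1000   # ~24,568
  printf "14nm 495mm^2: %d kT/mm^2\n", 12500 / 495 * 1000   # ~25,252
  printf "7nm  331mm^2: %d kT/mm^2\n", 13230 / 331 * 1000   # ~39,969
}'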
 
Been playing with ZFS for the first time today (as a project to create log-archive storage).

Using a Dell PowerEdge R720
2x E5-2690 @ 2.9GHz
32GB DDR3 1600MHz (2x 16GB - yeah, I know)

8x 4TB Seagate ST4000NM0023
upload_2019-10-25_9-29-34.png

The system itself is installed on an NVMe XPG SX8200 Pro on a PCI-E 3.0 adapter card.

ZFS is using a raidz2 pool,
giving us 22TB (21TB usable) out of 32TB of raw space.




Cool stuff
Enabled lz4 compression
Copying in the logs (got an older server with over 8-9TB of logs - a conventional Windows server).

Copied 152GB of logs as of this moment:

Code:
rpool/logarchive  refcompressratio      4.70x                  -
rpool/logarchive  written               33.1G                  -
rpool/logarchive  logicalused           152G                   -
rpool/logarchive  logicalreferenced     152G                   -

Wut? A 4.7x compression ratio, with the actual space taken being 33GB. If this compression ratio keeps up, I would be able to push around 100TB of logs into this ZFS pool.

Noice
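
For anyone curious, here's a minimal sketch of how a pool/dataset like this can be set up and the compression ratio checked. The device names are placeholders and the rpool/logarchive naming is just what shows up in the output above - adjust to your own setup:

Code:
# raidz2 pool across the eight 4TB drives (replace sdb..sdi with your devices)
zpool create -o ashift=12 rpool raidz2 sdb sdc sdd sde sdf sdg sdh sdi
# dataset for the logs with lz4 compression turned on
zfs create -o compression=lz4 rpool/logarchive
# see how well the data compresses and how much space it actually takes
zfs get refcompressratio,logicalused,used rpool/logarchive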



upload_2019-10-25_9-45-26.png



Feels_good_man.jpg


For a first-time user, ZFS seems like a revolution over typical RAID setups.
 

Cockpit - a very useful tool

upload_2019-10-25_10-4-7.png


You can look at logs (from the journal) and filter them by service, date, and severity - no more tail -f on the logs :D - plus there's an additional console.
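
For reference, getting Cockpit up is basically two commands - a minimal sketch assuming a Fedora/RHEL-style box (use zypper/apt and the matching package names elsewhere):

Code:
# install cockpit plus the storage and VM modules shown in the screenshots
sudo dnf install cockpit cockpit-storaged cockpit-machines
# enable the socket-activated web service, then browse to https://<host>:9090
sudo systemctl enable --now cockpit.socket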

upload_2019-10-25_10-6-44.png


You can do all kinds of nice manipulation of the storage (expand and shrink LVMs, create RAID groups, delete partitions, format, etc.) in a nice responsive GUI.
upload_2019-10-25_10-8-19.png



The same goes for networking.
upload_2019-10-25_10-9-46.png


KVM virtual machines
upload_2019-10-25_10-10-23.png


You can visually edit Targets, Services (systemctl/service), Sockets (which you can kill), Timers (like crontab but newer - better), and Paths.

upload_2019-10-25_10-12-2.png

And if you click on a service, it will show you its options etc. (and the latest journal logs).

upload_2019-10-25_10-12-39.png



There are also software updates (but I have disabled the repo, so it's not going to show at all on my system/s). It also has a built-in terminal console.

You can also view a dashboard and jump between servers if you have added them. By default it shows CPU/mem/net/disk IO stats; if you click on any of the added hosts you can jump onto it and get full access to all the previously described modules on that server (from a single location).
upload_2019-10-25_10-15-25.png
 
I bought a used Supermicro 846E16-R1200B chassis with a SAS2 backplane for 24 disks.
XAZYtaR.jpg


stock insides
8779O0h.jpg


Replaced the 2x 80mm fans with Noctua NF-R8 redux-1800 PWM for more cooling and silence.
Also replaced all 3 fans on the front with 3x 120mm Noctua NF-F12 industrialPPC-2000.
eSf02Da.jpg


I also started thinking about how to place the watercooling, locations etc. ~~ in the end the locations differed. (The mobo is some really old Biostar I used to map out the ATX holes and work out placement.)
yLouLU2.jpg


(Final locations - poor picture, I know; I'll post a better one soon.)
1wnvn9i.jpg



Other details
- Yes, the radiators are actually outside the case but inside the rack, so pipes go out of the case *(using an Alphacool 29886 HF 38 slot cover Panama), with 2x Alphacool quick-releases on the other side.
- I used plastic adhesive insulated standoffs with sticky mounts (they are great - really easy to mount the motherboard, no screws, no tapping etc.)
- Using an Alphacool 13194 VPP655 PWM pump (the version without reservoir).
- The mid 3x 120mm fans, reservoir, and pump are mounted using velcro (it's great; without any tapping I was able to just place them in unlikely locations - not to mention it stabilizes them and reduces noise further).
- Used deionised water.
- The system currently has 3 SSDs (2x internal boot drives natively connected to the mobo's SATA, and 1 SSD on the SAS backplane). Used a ********** 2.5" to 3.5" adapter; it appears to be quite cheap - and high quality for $8.
- For HDDs I have acquired a lot of 20 HGST HUH728080AL5204 8TB drives for $600 (still waiting for them).
s-l1600.jpg


- The SAS controller was a pain... I used an LSI SAS 9207-8i (it came with v15 firmware...); there were problems with reads and writes - disks would time out.
I upgraded to the latest v20 firmware only to find that it has even more issues: reads were fine, but writes were not happening due to timeouts. After downgrading the firmware through the UEFI interface to v19 it's working meh-ok ~ reads and writes are fine, but I'm not 100% happy, as my SAS SSD actually disappeared once, forcing me to re-plug it and have the controller rediscover it.
---> I will be moving the HDDs/SSDs to an Intel RES2SV240 expander:
https://www.ebay.com/itm/Intel-Expa...e=STRK:MEBIDX:IT&_trksid=p2060353.m1438.l2649
Hopefully that will resolve all the issues, and I will have more bandwidth overall, since this one allows for 24 ports without sharing the SAS cables (a perfect ratio of 4 devices per cable).
I hope Intel doesn't fail me.
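
For anyone fighting the same firmware dance on these LSI cards, the usual tool is LSI's sas2flash utility - a minimal sketch (the firmware/BIOS file names below are just examples from whatever P19/P20 package you download, not exact paths):

Code:
# list attached controllers with their current firmware and BIOS versions
sas2flash -listall
# flash controller 0 with a specific firmware image and option ROM
sas2flash -o -c 0 -f 9207-8i_P19_IT.bin -b mptsas2.rom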

- In other specs, this is the x470 mobo (MSI Pro Gaming Carbon) with the Radeon VII; I had to remove the 2nd GPU though, and added 2 NVMe drives instead.
(If anyone is interested, I have a PCI-E x16 ASUS 4x NVMe card with 2 unused XPG 8200 NVMe drives.)


I'll attach more photos later tonight - I was lacking a front-panel 16-pin splitter before I could connect the front buttons like power-on etc. (I left myself a small cable; connecting it for a brief moment will start the system.)
LWt4T23.jpg


To-do:
get a CPU 8-pin to 2x 8-pin PCIe cable so that I can use the Supermicro power distribution board to power the GPU (at this moment I have a normal PSU wired in - can't say I'm happy, it was a pain)
get a 360mm piece of wood to seal and create more static pressure for the backplane fans, and seal the side holes as well with electrical tape or something similar

(In terms of noise it's very quiet: fans typically spin at around 1000rpm, and the system idles at around 24°C.)
 
Current project update

Figured out that the SAS expander performance on the LSI 9207-8i is directly related to its temperature... I didn't think it could generate so much heat. I'm going to have to mount some small fan on it.
Struggling with the layout for the moment, as I had issues putting the GPU and SAS expander on the same port and bifurcating it (the GPU was not visible; PCI-E NVMe and other cards were, though - so my setup just didn't like that).
~ This means I have to use a slot further out for the SAS controller, annnd that's a problem - I have pipes going over that port to the rad. (At the moment I have managed to connect it using a PCI-E extender cable, but it's not nice - it feels flimsy.)

So, 2 things to do:
- mount a fan on the LSI 9207-8i (it has to be a 40mm one); thinking of using a Noctua or something quiet that still has some airflow.
- get some L-shaped PCI-E riser (something like this):
1583265167612.png


The prior idea is lost for the moment (where I wanted the 2nd GPU and SAS expander on the same slot).
ZlC3DAz.jpg
7TwmC5o.jpg
(The riser in the picture was made by our forum member C_Payne:
https://peine-braun.net/shop/index.php?route=product/category&path=65_59
It's quality, I must give it that.)

The GPU is a Radeon Pro WX2100 (mainly for the main system for KVM - I had some issues with that on SUSE since both cards are AMD and use the same driver... I might reconsider and use an NV card to make things easier for myself).
 
