Is the timing right to build this rig?

Cooler Master Hyper 212+ or 212 EVO: great coolers that won't break the bank.

This, but honestly, if you're not going to OC, the stock cooler on the i7 is more than capable. That said, an aftermarket cooler, especially the ones mentioned above, will perform significantly better than the stock Intel cooler.
 
Well, I went ahead and placed the order since I needed to order the PSU within the 24-hour promo period. I am going to try the stock cooling first; if I need to add an aftermarket cooler later, I will. Another change I made is the optical drive, as there is an ASUS drive with practically identical specs that is cheaper and also has a mail-in rebate. So here is the final build. I will link to this post in the OP for reference.

$189.99 - GIGABYTE GA-Z77X-UD5H LGA 1155 Intel Z77 HDMI SATA 6Gb/s USB 3.0 ATX Intel Motherboard
$289.99 - Intel Core i7-3770 Ivy Bridge 3.4GHz (3.9GHz Turbo) LGA 1155 77W Quad-Core Desktop Processor
$124.99 - G.SKILL Trident X Series 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory
$439.99 - EVGA 04G-P4-3673-KR GeForce GTX 670 FTW+ 4GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready SLI Support Video Card
$139.99 - SeaSonic SS-660XP 660W ATX12V / EPS12V 80 PLUS PLATINUM Certified Full Modular Active PFC Power Supply
$199.99 - COOLER MASTER HAF X Blue Edition RC-942-KKN3 Black Steel / Plastic ATX Full Tower Computer Case
$349.99 - SAMSUNG 840 Series MZ-7TD500BW 2.5" 500GB SATA III Internal Solid State Drive (SSD)
$169.99 (x2 RAID1) = $339.98 - Seagate Barracuda STBD3000100 3TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive Kit
$79.99 - ASUS Black 12X BD-R 2X BD-RE 16X DVD+R 12X DVD-RAM 8X BD-ROM 8MB Cache SATA Blu-ray Burner

Sub Total: $2,154.90

Case Coupon Code: -$30.00
Motherboard Coupon Code: -$13.00
PSU Coupon Code: -$30.00
PSU Mail-in Rebate: -$20.00
Video Card Mail-in Rebate: -$10.00
Optical Drive Mail-in Rebate: -$20.00

Total: $2,031.90

In addition, I will get 1% back from my bank and 1% back from ebates. That is $21.55 each, so it drops another $43.10. So you guys helped me drop this build from over $2.5k to below $2k, at $1,988.80. Awesome! I can't express how much I appreciate all the help.
 
What would you recommend? A long, long time ago I bought an aftermarket cooler for a Palomino chip: an SLK-800 combined with an 80mm Tornado. That thing cooled really well, but the Tornado sounded like a jet engine. After that I started water cooling. I did that for a few years, until a leak sprang from one of the waterblocks and soaked a bunch of my components. I've been using stock coolers ever since. Anyway, I am definitely willing to add a better cooling solution to my budget, but I would like to stick with air, and the quieter the better.

I was going to suggest something like the Corsair H60, but since you don't like water cooling, I won't :D. I'm biased about noise - indeed I think I have a bit of a reputation here for it :) - so I'll suggest you take a good look at the Nofan, unless you have a cat who likes sleeping on top of your computer case. The TDP of the 3770 is within the Nofan's limits; I dropped to the S (the 3770S) to play it safe. I have the same motherboard as the one you're getting, and your graphics card will work at full speed in the x8 slot - do make sure you get one with a blower fan if you do go for a Nofan.
 
You may wish to download a third-party program such as SpeedFan to keep the noise down; stock coolers can be quite loud at 100%.
 
Well, there is no SpeedFan for Linux, but my fan speeds are always visible as a component of my on-screen system monitor, GKrellM. GKrellM utilizes lm-sensors, which also includes a fan control module, so all I have to do is enable it and configure it if I don't like the defaults. Thanks for the advice.
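
For reference, the lm-sensors route looks roughly like this (a sketch; it assumes the kernel driver for your board's sensor chip supports PWM control):

Code:
# sensors-detect   # probe for hardware monitoring chips, answer the prompts
# pwmconfig        # map PWM outputs to fans and generate /etc/fancontrol
# fancontrol       # the daemon that adjusts fan speeds per /etc/fancontrol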
 
Having done a lot of further reading, I am concerned that the Samsung 840 might have been a poor choice, even considering the capacity at the price point. It seems the Samsung 830 might be a better choice, and the 840 Pro is, of course, a much better drive. Comparing the Crucial M4 to the 840 seems a little more mixed. Does anyone have any thoughts on that? It seems Newegg does not take returns on SSDs, but if it was a bad choice, I could at least call them and see if something could be worked out before I open the packaging. No matter what, I realize it will be a lot faster than the solution I was going to use, but the Samsung 840 does not look to be a terribly impressive performer when it comes to write speeds. The endurance of the drive also seems to be a little up in the air given the different technology and short time on the market. Thanks for any thoughts.
 
Well first and foremost, it is 840, not 640. ;).

Second, the performance of the 840 matches its price. If you want more, it will cost you a lot more money.
 
Doh! I fixed it. There was a gap between reading the articles I was looking at and posting that. I have too much information going in and not enough slots to store it in. Anyway, the comparisons I am doing seem to put the write performance of the 840 below its competition in its capacity category, but high in comparison to lower-capacity performance drives. Just from the standpoint of the numbers, it looks a lot better next to the 128GB 840 Pro than it does next to the 512GB 840 Pro.

Here is one review I looked at: http://www.tweaktown.com/reviews/5072/samsung_840_500gb_ssd_review/index.html

Unfortunately the pros don't really do anything for me, since I am putting this into a desktop. The write performance shown in the benchmarks there looks pretty sad next to the SanDisk, which is almost the same price, but I would have been very wary of buying that drive given the user reviews.

I don't think this is going to be much of an issue. The writes are still significantly faster than my current RAID0, and the read performance appears to be very good. I would just caution anyone crazy enough to reach this point in the thread that deciding which SSD to buy takes a little more attention than deciding which HDD to buy. Performance and longevity vary a lot more wildly with SSDs.
 
I had no issues putting the machine together. Two things I expressed concern about that I can go ahead and lay to rest...

The PSU and case combination were not an issue. I got slightly creative with the CPU cable to run it behind the back panel (it wouldn't make it going through the grommet holes) but that wasn't a big deal. I was able to run all the wiring through the back. The build came out very clean and I had plenty of room left over when it was done. Speaking of the PSU... the machine has been busy for a few hours compiling Gentoo packages and the PSU fan has not kicked on once (I did turn off hybrid mode to make sure it is working.)

The second thing is the concern I expressed about the quality of the Gigabyte board and complaints about the PCIe slot housings separating from the board. Either those people got defective boards or they did something wrong. The quality of the board is excellent. I looked very closely at all of the PCIe slots and, believing they appeared to be well attached, gave each a bit of a tug. There were no signs that they would pull away from the pins. I also think complaints about the 3D BIOS are largely overblown. It's pretty cool, and you can always go into advanced mode if you want out of that interface or want access to more options. It may have been more cumbersome in the past, but it doesn't seem like a big deal to me in its current state.

I will come back when my Gentoo is closer to complete with a final update on things. So far I'm very happy with how this has gone.
 
Well, my computer boots almost before I can look up to see it. That is pretty awesome. I don't have a desktop compiled yet, so I haven't been able to get much of a feel for it. I'm not running the Gentoo compiles on the SSD: considering that all of the compilation is done in a temporary area that is deleted immediately upon completion, plus the frequency and quantity of updates, I decided it involved too many temporary writes to handle on the SSD. I actually created the whole /var partition on the RAID1 spinners, and that includes the Gentoo package compilation path. That is also where logging takes place, and the spool is in there too, so that whole partition is best left on the HDD's. Once the packages are compiled and installed, they are on the SSD, so all of the OS's binaries will be on the SSD. I will be able to get a feel for it once I get the desktop up and running. My personal software development area will be on the SSD, so those compiles should benefit greatly from it.
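
For anyone curious, keeping the build area off the SSD took no effort at all, since Portage's temporary directory already defaults to /var/tmp. A sketch of the relevant knob (shown here as an explicit make.conf entry, though the default already does the right thing):

Code:
# /etc/portage/make.conf
# PORTAGE_TMPDIR defaults to /var/tmp; builds happen under
# ${PORTAGE_TMPDIR}/portage, so with /var mounted on the RAID1
# mirror the temporary build writes never touch the SSD.
PORTAGE_TMPDIR="/var/tmp"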

I did already run some benchmarks (that can be launched on the shell without a desktop) on my old system and the new system. The numbers say the difference is huge. I'll get the numbers together sometime later today.
 
I second that: go for the Gigabyte GA-Z77X-UD5H. It's a better quality board than anything ASRock puts out. Personally I only use ASUS and Gigabyte boards; the ASRock is overpriced and of disappointing quality.

Good choice on the power supply. The above poster is right, the HX650 is a Seasonic unit; otherwise go with an AX series unit from Corsair, or simply choose a Seasonic-branded unit. Simply the best units out there.
 

A good way to avoid noise is good fans. Fans with hydro wave or fluid dynamic bearings will last a lot longer and run very quietly. Cougar fans are very good and quiet, less than 18 dB at 60 to 70 CFM, which gives you good airflow.
 
Well it took me longer to get to this than I said it would but better late than never I suppose.

First I will share a couple of images of the finished product. Click them to see large versions.



You can see from this picture that the case, PSU, and mobo combo worked out really well. Notice the CPU power coming from the top instead of through the grommets: it wouldn't reach going through the grommets, and this was cleaner anyway, so no big deal. I think the whole thing came out very clean. There might be some purists out there who would hide the wires even better, but this is to my satisfaction. Also notice that the CPU fan wire is really close to the fan. That was the way it came out of the box. When I turned on the machine the first time I heard a nasty sound, and that wire was the cause. I had to push it away from the fan all the way around.



Here is the system in its habitat.



This is actual lighting.

With that out of the way, let me share a couple of numbers. This is about to get downright geeky and long-winded. I suspect most will fall off along the way, but here goes anyway. If you do decide to skip through, consider reading the last four paragraphs.

As I stated in the OP my desktop runs Linux so these won't be the usual benchmarks that you are familiar with. I think they provide enough information though especially since the main thing that took shape through this thread was my acceptance of a future without spinning drives. Of course these things always happen in baby steps so I have a hybrid filesystem with some of it utilizing SSD and others contained on a RAID1 mirror managed by mdadm.

So here is how my partition table looks:

Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       458G   67G  368G  16% /
devtmpfs        7.9G     0  7.9G   0% /dev
tmpfs           7.9G  848K  7.9G   1% /run
/dev/sda2       504M   56M  424M  12% /boot
/dev/md127      9.9G  1.2G  8.2G  13% /var
/dev/md126      2.7T  1.5T  1.2T  57% /sgraid1
tmpfs           8.0G   92K  8.0G   1% /tmp
/dev/sda1       511M  132K  511M   1% /boot/efi

To help make sense of this:
sda is the SSD.
sdb and sdc are the HDD's but they are mounted in RAID as md#.
tmpfs filesystems are ramdisks, meaning they exist in system memory as opposed to on a drive.

I have 3 partitions on my SSD: the root ('/') partition, the /boot partition and the special efi partition which loads the boot loader into UEFI.

The RAID is partitioned three ways: /var (this is where system logging is done, as well as compiles of the packages that are built during installs and updates), /sgraid1 (this is where the automatic nightly incremental backup of the SSD is stored; it is also where directories can be created as symlink targets for parts of the system that don't need to be on the SSD, such as download or media folders), plus a swap partition (pagefile in Windows lingo) on each HDD, which can't be seen in the table.
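
For anyone who wants to replicate the mirror, it's a couple of mdadm commands (a sketch; the device and partition names here are illustrative, not copied from my install):

Code:
# create a RAID1 mirror from matching partitions on the two drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0        # put a filesystem on the array
cat /proc/mdstat          # watch the initial resync progress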

Finally, I use ramdisk for non-persistent data such as /tmp. These areas will be empty upon reboot, for obvious reasons. Using ramdisk instead of HDD space makes /tmp very fast and avoids senseless writes to disk.
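
The tmpfs mounts are just fstab entries; something along these lines (a sketch sized to match the table above):

Code:
# /etc/fstab -- keep /tmp in RAM; contents vanish on reboot by design
tmpfs   /tmp   tmpfs   defaults,size=8G   0 0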

So this is working out very nicely. I have been using symlinks for a number of directories that I want on the HDD. I also went into Steam and added a second library folder, so when I install a game I can choose whether to install it into the SSD path or the HDD path. This has all worked fairly seamlessly. I think I can make it even nicer with a couple of scripts: one that creates a directory on the HDD, moves a directory's contents into it, and then symlinks it, all in a single command (see the sketch below), plus a similar one for copy operations and a few other tasks. This will make moving things from one drive to the other pretty much as seamless as navigating them.
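
Here's roughly what I have in mind for the move-and-symlink helper (just a sketch; the hdd_move name and the /sgraid1/offloaded prefix are placeholders I haven't settled on):

Code:
#!/bin/sh
# hdd_move: relocate a directory to the RAID and leave a symlink behind
# usage: hdd_move /absolute/path/to/dir
set -e
src="$1"
dst="/sgraid1/offloaded${src}"     # mirror the original path under the RAID
mkdir -p "$(dirname "$dst")"       # make sure the parent exists
mv "$src" "$dst"                   # move the contents
ln -s "$dst" "$src"                # symlink so everything else still works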

Now for some benchmarks. I'm going to start with drives and then move to something CPU intensive.

Let me remind everyone of my old specs:

Asus M3A79-T Deluxe
AMD Phenom II X4 975 @ 3.6GHz
8GB G.Skill 1066 5-5-5-15
XFX GeForce 9800GTX+ BE
2 x Western Digital 1TB 7200RPM 64MB Cache = 2TB RAID0 Array

I am going to label that system PhenomII. All tests on this system were performed at the stock clock of 3.6GHz.

The new system is here. I will label that Core-i7. All tests on this system were performed at the stock performance clock of 3.9GHz.

The first test is hdparm, which measures cached reads and buffered disk reads. The cached reads do not involve disk access and reflect the performance of the processor and memory. The second value provides the measured performance of reading from the disk.

First the performance of a single WD 1TB drive on the PhenomII system:

Code:
# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   7688 MB in  2.00 seconds = [B]3844.64 MB/sec[/B]
 Timing buffered disk reads: 376 MB in  3.01 seconds = [B]124.84 MB/sec[/B]

So the result is a disk read speed of 125MB/sec and cached read speed of 3,845MB/sec.

Now the performance of both WD 1TB drives in RAID0 (striping) on the PhenomII system:

Code:
# hdparm -tT /dev/md3 

/dev/md3:
 Timing cached reads:   7714 MB in  2.00 seconds = [B]3857.70 MB/sec[/B]
 Timing buffered disk reads: 694 MB in  3.01 seconds = [B]230.88 MB/sec[/B]

These results are not terribly surprising. The cached read is identical but the disk read is almost doubled to 230MB/sec.

Moving on to the Core-i7, this test is of a single Seagate 3TB HDD:

Code:
# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   28832 MB in  2.00 seconds = [B]14431.64 MB/sec[/B]
 Timing buffered disk reads: 492 MB in  3.01 seconds = [B]163.59 MB/sec[/B]

The first thing that should jump out immediately is the enormous difference in cached read performance at 14,431MB/sec. Obviously the system performance is impressive compared to the PhenomII system. The single drive read performance is also an improvement over the WD 1TB at 164MB/sec.

This test is the 2 Seagate 3TB HDD's in RAID1 (mirroring) on the Core-i7 system:

Code:
# hdparm -tT /dev/md127 

/dev/md127:
 Timing cached reads:   28740 MB in  2.00 seconds = [B]14385.74 MB/sec[/B]
 Timing buffered disk reads: 500 MB in  3.01 seconds = [B]166.12 MB/sec[/B]

The difference here is negligible; the RAID is providing no read benefit over single-drive performance. That is expected with mdadm RAID1: a single sequential read is served from just one mirror member, so mirroring buys redundancy, not sequential throughput.

The final hdparm test is the Samsung 840 500GB SSD in the Core-i7 system:

Code:
# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   29292 MB in  2.00 seconds = [B]14664.08 MB/sec[/B]
 Timing buffered disk reads: 1558 MB in  3.00 seconds = [B]519.18 MB/sec[/B]

The buffered disk reads come pretty close to the rated read speed (540MB/s) for the Samsung 840 500GB at 520MB/sec. That is more than twice the speed of the WD RAID0.

Now I'm going to test reading and writing with 'dd'. When running this test I perform four steps: first I write a 1GB file to disk, then I drop the cache, then I read from the disk, and finally I perform a cached read. This test exercises the disk more than 'hdparm'. I will not be testing single-drive HDD performance here because the drives are partitioned in RAID and this test actually writes a file to a partition, so testing them in isolation is not possible.

2 x WD 1TB HDD in RAID0 in PhenomII system:

Code:
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB) copied, 5.39116 s, [B]199 MB/s[/B]
# echo 3 > /proc/sys/vm/drop_caches 
# dd if=tempfile of=/dev/null bs=1M count=1024
1073741824 bytes (1.1 GB) copied, 4.52663 s, [B]237 MB/s[/B]
# dd if=tempfile of=/dev/null bs=1M count=1024
1073741824 bytes (1.1 GB) copied, 0.296205 s, [B]3.6 GB/s[/B]

So the first result shown is the write performance which was measured at 199MB/s. The second value is the uncached read performance at 237MB/s. The third value is the cached read performance at 3.6GB/s. The numbers match up closely with 'hdparm'.

2 x Seagate 3TB HDD in RAID1 (mirroring) in Core-i7 system:

Code:
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB) copied, 7.829 s, [B]137 MB/s[/B]
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1073741824 bytes (1.1 GB) copied, 5.84933 s, [B]184 MB/s[/B]
# dd if=tempfile of=/dev/null bs=1M count=1024
1073741824 bytes (1.1 GB) copied, 0.131753 s, [B]8.1 GB/s[/B]

Here you can see that the RAID0 beat the RAID1 in writes by quite a bit, which is obviously the expected result. The cached read performance is more than double that of the PhenomII system. The uncached read speed is interesting, as it improved over the 'hdparm' test; this is likely due to the larger file size giving the drive time to reach a higher sustained transfer rate.

The final 'dd' test is the Samsung 840 SSD in the Core-i7 machine:

Code:
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB) copied, 3.31928 s, [B]323 MB/s[/B]
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1073741824 bytes (1.1 GB) copied, 2.01309 s, [B]533 MB/s[/B]
# dd if=tempfile of=/dev/null bs=1M count=1024
1073741824 bytes (1.1 GB) copied, 0.129722 s, [B]8.3 GB/s[/B]

When it comes to reads, it doesn't get much clearer than this: the SSD came in just under its rated performance at 533MB/sec. The HDD's are not even in the same league. When it comes to writes, the lower performance of the Samsung 840's TLC NAND, in comparison to MLC, shows at 323MB/s. That is still better than the lower-capacity Samsung 840 drives, and it still smokes the HDD's, so there is nothing for me to complain about there.

Now I will wrap up the benchmarks with a real-world test of the system. One of the open-source projects I develop on is called Odamex. It contains about 150,000 lines of C++ code. I compiled it on each system, and on the Core-i7 system I compiled it on both the HDD and the SSD. For this test I compiled it multiple times, the first time utilizing a single thread and the subsequent compiles using multiple threads. The time is displayed in three parts: real time, user time, and system time. The first value is the actual (elapsed) time it took to compile, the second is the amount of user CPU time spent, and the third is the amount of system (kernel) CPU time. The sum of the second and third values should come close to, but not surpass, the first in single-threaded mode, but it will be higher when running multiple threads. The compile is done with cmake, make, and the GNU GCC compiler.
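
For anyone who wants to reproduce this, the timing is just the shell's time builtin wrapped around make. Roughly (a sketch; it assumes a standard out-of-source cmake build directory):

Code:
cmake ..                        # generate the makefiles in a build directory
make clean && time make -j1    # single-threaded baseline
make clean && time make -j4    # one job per physical core
make clean && time make -j8    # one job per hardware thread on the 3770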

Here are the compile times for the PhenomII system:

Code:
Odamex build with 1 thread

real	[B]2m12.758s[/B]
user	1m58.724s
sys	0m7.967s

Odamex Build with 4 threads (CPU maxed)

real	[B]0m39.512s[/B]
user	2m3.182s
sys	0m8.667s

Using all 4 cores significantly reduces compile time. When you are working on code, issuing your make with threads ('make -j4' for 4 threads) is going to save a ton of time. Compiling 150,000 lines of C++ in 40 seconds is fast; 10 years ago it was unimaginable that my desktop would be capable of that.

Here are the compile times for the Core-i7 system on the HDD RAID1:

Code:
Odamex Build with 1 thread

real	[B]1m42.410s[/B]
user	1m34.157s
sys	0m5.364s

Odamex Build with 4 threads

real	[B]0m26.826s[/B]
user	1m37.942s
sys	0m5.383s

Odamex Build with 8 threads (CPU maxed)

real	[B]0m22.974s[/B]
user	2m44.294s
sys	0m8.464s

The single process performance of the Intel Core i7 3770 is a big improvement over the PhenomII. There is also a noticeable improvement in multi-threaded compile time. The difference between 4 threads and 8 shows that hyper-threading does provide some benefit when performing the compile.

Now for the final test, compiling on the Core-i7 with the Samsung 840 SSD:

Code:
Odamex Build with 1 thread

real	[B]1m44.632s[/B]
user	1m37.809s
sys	0m5.127s

Odamex Build with 4 threads

real	[B]0m26.567s[/B]
user	1m38.397s
sys	0m5.360s

Odamex Build with 8 threads (CPU maxed)

real	[B]0m22.881s[/B]
user	2m45.374s
sys	0m8.286s

This result completely validates my decision to place /var on the HDD for a Gentoo install. The SSD provided no benefit when it came to compiling a large amount of code. The fact is, the operation is a CPU-intensive one, and source files are usually not terribly big. Whether you are compiling from the HDD or the SSD, it doesn't take much data to keep the CPU fed during this task. In every single one of the compile tests I performed, the threads consumed 100% of the core they were assigned.

Had I placed /var on the SSD, my Gentoo compiles would be writing gigs of temporary data to the SSD a couple of times a month for no benefit at all. I've been watching Gentoo compile packages through Portage for a long time. These compiles are fast even though they are on the HDD. The best thing I did for my compile times was build a new system with a faster processor and faster memory. BUT...

The best thing I did for general use of my system was install it on an SSD. Everything feels faster. The system boots in a matter of seconds. When I log in, my desktop appears practically instantly. I like to start a bunch of applications when I log in (shells, browser, e-mail client, MUD client, music player, Mumble, Steam) and they all start the moment I click them, whereas some of those took 20 seconds or more to start on my old system. When I open the file browser, it is instant. When I start Serious Sam 3, Guild Wars 2, or any number of other games, the load time is negligible. I will have to wait and see what the endurance looks like, but at this point I am very thankful that the forum members here talked me out of wasting money on a 4-drive VelociRaptor RAID10 configuration. You guys saved my ass on that one!

I am finally starting to settle into my setup. It takes a while because Gentoo is all about customization and I do a ton of it. Luckily, the major benefit of Gentoo is its rolling-release nature, so I will not have to reinstall it again unless something catastrophic happens. I am really enjoying the system. I always anticipate that something very annoying will go wrong when I build a system, and that just didn't happen this time. Every step of the way, the hardware cooperated with me. I owe a few here a beer (once again), especially Danny Bui. Thanks for all of the help!
 
And you were hesitant about getting that SSD :)

Glad to see the system is working out well for you! And thanks for the pics!
 