Vega's Heavyweight display and computer; edition 2012

What I have seen online since SB came out is that the EVGA boards aren't all that great. I would go with ASUS. The last three customer builds I have done have been Asus and I really liked them. All OCed well and were rock stable. When Ivy comes out I'll be picking up an Asus MB.
 
Sweet, got a Rampage IV Extreme and a nice clocking 3960X inbound and the other three GTX 680's in the mail. Now I just need to find some of that 2666 MHz DDR3 4x 4GB. Or do you guys think 2400MHz will suffice?

I might have to re-think my cooling loop on the CPU side to give it more breathing room now. ;)
 
Considering you're already running quad channel, I'd say 2400MHz is more than enough :|. I think your machine will have close to the bandwidth of a mainstream GPU :p
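
For what it's worth, the rough math on that comparison looks something like this (the GDDR5 comparison figure is a ballpark, not something from this thread):

```python
# Peak DDR3 bandwidth = transfer rate * bytes per transfer * channels.
def ddr3_bandwidth_gbs(mt_per_s, channels, bus_bits=64):
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

print(ddr3_bandwidth_gbs(2400, 4))  # 76.8 GB/s for quad-channel DDR3-2400
print(ddr3_bandwidth_gbs(2666, 4))  # ~85.3 GB/s for quad-channel DDR3-2666

# For comparison, a 128-bit GDDR5 card at ~5 Gbps per pin works out to
# roughly 80 GB/s, so the "mainstream GPU" comparison is in the right ballpark.
```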
 
I went with Team Group 4x 4GB 2400MHz 9-10-10-28 which seems pretty good.
 
Well I broke down and have four of these inbound:




I can't help it, EK water blocks draw me in like crack to a crack-whore! :D

I am going to hold off on the Rampage IV Extreme motherboard VRM and chip-set liquid blocks and the four RAM blocks until I am sure that is the route I want to take for my permanent setup.
 
niiiiiiiiiiiiiiiiiiiiiiiiice.

I need to finish my stupid bathrooms and put the finishing touches on my kitchen reno so I have time to install all my WC gear...

this thread is making me antsy
 
Ya, I should have a nice clocking 2700k for sale here once my 3960X/3770K testing is done. ;)
 
I don't know man, looking at the last pic in your OP, I think you'd found perfection already. Will you be content with the display quality of a CRT array?
 
Don't get me wrong, the 3x1 and 5x1 portrait 120Hz setups were pretty sweet. But for gaming, a CRT's crystal-clear motion is king. Add in the virtually seamless transitions between images, a 45"+ image at 10 megapixels and the depth effect added by the Fresnel lenses, and it is a very unique and awesome gaming experience that my other setups cannot equal.

It's just one of those things that you would have to see for yourself to truly appreciate. ;)
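
A quick sanity check on that 10-megapixel figure, assuming each FW900 is driven at 2304x1440 (a common max for that CRT; the exact mode isn't stated here):

```python
# Three FW900s in portrait, assuming 2304x1440 per display.
per_display = 2304 * 1440
total = per_display * 3
print(total, round(total / 1e6, 2))  # 9953280 px, ~9.95 MP

# In portrait surround that works out to a 4320x2304 desktop.
```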
 
For VRAM, keep in mind how demanding Skyrim and BF3 were when they came out.
More demanding games will come soon.
 
I used a Fresnel lens for a while on a racing simulator setup and the depth of field effect was way more pronounced than I expected.

I remember being impressed every time I sat down to race. I can't imagine how it must feel to have that resolution + that size of image in front of you.

I watched the Skyrim YouTube video 5 times with my nose in my screen... but it didn't do much lol
 
Well I broke down and have four of these inbound:




I can't help it, EK water blocks draw me in like crack to a crack-whore! :D

I am going to hold off on the Rampage IV Extreme motherboard VRM and chip-set liquid blocks and the four RAM blocks until I am sure that is the route I want to take for my permanent setup.
LOL! These guys are about 10 minutes from my house...
 
I used a Fresnel lens for a while on a racing simulator setup and the depth of field effect was way more pronounced than I expected.

I remember being impressed every time I sat down to race. I can't imagine how it must feel to have that resolution + that size of image in front of you.

I watched the Skyrim YouTube video 5 times with my nose in my screen... but it didn't do much lol

Ya, the Fresnels have negative aspects but they also have quite a few positives. I enjoy using them.

LOL! These guys are about 10 minutes from my house...

Sweet. Good company.
 
Working on the ambient side of the loop today. Setting up to test the 3960X and RIVE under water.


SANY0014-3.jpg




Mocking up components to test routing of liquid lines. 1/2" ID - 3/4" OD Norprene tubing is hard to bend and fit in tight places.

SANY0015-2.jpg




The Team Group 2400 9-11-11-28 RAM came in (4x 4GB).

SANY0017-2.jpg




Working on some of the supply/return valve systems.

SANY0018-1.jpg




Made a custom stand out of some old Ikea speaker stands.

SANY0011-4.jpg




Reservoir up top with a couple silver kill coils.

SANY0012-3.jpg




Liquid line routing. The open-ended valves that are shut will attach to the geothermal section of the cooling loop.

SANY0019-1.jpg




Testing out the loop and checking for leaks.

SANY0020-2.jpg




Getting rid of air in the system has been a huge PITA. I am going to have to come up with some sort of custom pump setup to force water through the loop and flush all the air out under pressure. The Iwaki RD-30 is a beast of a pump in a closed system, but if there is some air in the lines it has a hard time getting going. The loop has already taken a gallon of distilled water and I ran out, so I wasn't able to fire the rig up. Tomorrow is another day.
 
I am thinking of re-arranging the cooling loop so that:

Branch #1 = CPU > X79 > VRM/MOSFET

Branch #2 = 680 > 680 > 2x RAM Sticks

Branch #3 = 680 > 680 > 2x RAM Sticks

I think the balance between those would be fairly close. The VRM/MOSFET coolers are really low restriction and would pretty much balance the resistance of the 2x RAM sticks. So essentially it will be one CPU block versus two GPU blocks to balance resistance. Anyone think the resistance balance would be way off on the above configuration? (Doesn't need to be perfect)
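
As a rough illustration of that comparison, here is a sketch with made-up relative restriction scores (higher = more restrictive); none of these numbers are measured, they just mirror the reasoning above:

```python
# Hypothetical relative restriction scores for each block (not measured).
restriction = {
    "cpu_block":  1.0,  # full-coverage CPU block, fairly restrictive
    "x79_block":  0.2,  # low-restriction chipset block
    "vrm_mosfet": 0.2,  # low-restriction VRM/MOSFET cooler
    "gpu_block":  0.5,  # full-cover GPU block
    "ram_pair":   0.4,  # two RAM-stick blocks in series
}

branch1 = restriction["cpu_block"] + restriction["x79_block"] + restriction["vrm_mosfet"]
branch2 = 2 * restriction["gpu_block"] + restriction["ram_pair"]
branch3 = branch2  # identical to branch 2

print(branch1, branch2, branch3)  # 1.4 vs 1.4 vs 1.4 with these guesses
```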
 
Pretty insane how the monitors are almost edge to edge, but at the end of the day I would rather have a 4K projector for $10k.

Human eyes have depth perception and we can sense scale very well, so even if you sit close enough to a shoebox-sized screen that it takes up the same amount of your vision as a wall 12 feet away, it doesn't mentally equate to the same experience.
 
Naw, projectors have a whole slew of issues, from the obvious (you need a huge screen and room, you play in the dark, there is fan noise from the projector over your head) to the not-so-obvious (current 4K projectors only upconvert 1080p signals, max out at 60Hz, have not-so-good motion, and have high input lag). Upscaling = no thanks.

http://www.trustedreviews.com/sony-vpl-vw1000es_Projector_review
 
SANY0017-2.jpg



Modded Lian Li PC-T60? Is that going to be the permanent home for it or just for testing purposes?
 
I have an Epson 1080p projector and the fan isn't an issue despite being right above my head (it's a really silent fan), and the whole point of it IS the ginormous screen.

You're right though, I thought that Sony was able to take a native 4K signal; pretty lame that it can't.
 
Will there be an equally awesome sound system to go with this epic build?
 
The only downside to a projector setup for me is that the best-looking displays seem to have the highest input lag. I have a JVC-RS45 currently and it's superior for movies, but it has too much input lag to make FPS gaming any fun.

They certainly have 4K projectors out now that cost tens of thousands. Realistically, the problem is we need projectors with dual-link DVI (or DisplayPort) that can accept a 120Hz input, have a great contrast ratio, and don't cost over $2k.

I have two projection screens: a high-contrast grey screen in the living room which works in moderate to low lighting, and a more whitish reference screen in my bat-cave theater.

It's not for everyone and it takes a lot of money to get there, but if you're looking for the ultimate in simulation gaming on a 100"+ screen, projection is where it's at. One day we will have low-input-lag, high-contrast, high-resolution projectors that don't cost $10,000+, and that will be the day. For now I just use one projector for movies, one for simulations, and I play other games on 120Hz PC monitors. As we move away from bulbs and shift towards LED/laser light sources, I hope this becomes an affordable reality in the next 5 years.
 
Got my RAID 0 setup (boy that was a nightmare on X79), Win 7 installed. Games are downloading. Got three GTX 680's now. This is why I love nVidia:

FW900Surround.jpg


Even something as complicated as running three CRT's in portrait is a snap. Install the driver, hit configure Surround displays, bam - organize the screens and you're done. Even these early drivers work really well. Thankfully nVidia allows each card to use its own RAMDAC for each FW900, something AMD cannot do.


The setup is kind of a mess while I install and test stuff:

SANY0001-17.jpg



The 3960X at stock settings under maximum Intel Burn Test only reaches a max temp of 41 C on the cores using the ambient radiator. I used Thermaltake Chill Factor III this time around and it appears to be doing quite well.
 
nVidia is definitely catching up in multi-monitor displays; they win in some areas and lose in others.

Does it still require dual cards to do Surround?
 
My understanding is as of the 680, no. Of course that is assuming all LCDs -- when you throw in CRTs as Vega has things get more complicated.
 
Aaah yeah, forgot the CRT's make it more difficult. Due to analog, I assume?
 
Expect PCI-E 3.0 vs 2.0 tests, tests of what is required to reach 2GB of VRAM and what happens when that limit is hit, etc. So far in BF3 the results are pretty bad news once the memory reaches 2048MB! (Although it takes quite a bit to surpass the 2GB VRAM amount even at extremely high resolution and settings.) More to follow...
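
For a very rough sense of why ~10 MP surround leans on a 2 GB card, just the basic render targets add up quickly; real usage depends heavily on the engine, AA mode, and driver, so treat these numbers as guesses:

```python
# Illustrative render-target math only; textures and geometry use the rest.
pixels = 4320 * 2304                 # 3x FW900 portrait surround
color  = pixels * 4                  # 32-bit back buffer
depth  = pixels * 4                  # 32-bit depth/stencil
msaa4x = pixels * 4 * 4              # 4x MSAA color samples

render_targets_mb = (2 * color + depth + msaa4x) / 2**20
print(round(render_targets_mb))      # ~266 MB before any textures or geometry
```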

I love this new nVidia Surround. It keeps the desktop taskbar only on the center monitor, and when I maximize windows they only maximize on the center screen. Awesome features! With the simple registry edit, I've got all of the cards running at PCI-E 3.0. In the BIOS I can switch between 1.0/2.0/3.0 at will, so this will make for some nice tests.

Who wants to bet there will be appreciable differences on my setup? ;)
 
Hopefully you can figure out what it takes for the 2GB limit to show in game, and figure out what nVidia does to manage it. Would be good to know.
 
Don't think you're going to see any difference between 2.0 and 3.0. 1.0 to 2.0 possibly, depending on how many lanes are devoted to each card, but I doubt 3.0 will make any difference. They don't come anywhere near saturating 16x PCIe 2.0.
 
There is no motherboard that has quad native 16x PCI-E 2.0. So if I run my 4-way SLI 680 setup, it will be 16x/8x/8x/8x. I haven't run any tests yet, but I bet you all the tea in China I will see a performance increase when I bump those lanes from PCI-E 2.0 to 3.0 and they become the equivalent of 32x/16x/16x/16x PCI-E 2.0 lanes. ;)
 
Not much. 5% or less, probably a lot less. PCIe 2.0 8x has been shown to cause almost zero performance degradation with current hardware, and I doubt the 680s push /that/ much more data. I could easily be wrong, will be interesting to see.
 
My understanding is as of the 680, no. Of course that is assuming all LCDs -- when you throw in CRTs as Vega has things get more complicated.
While you don't need multiple cards to do Surround, you HAVE to use multiple cards if you have them (if you're running SLI + Surround, you need to connect one display to card 2).

Expect PCI-E 3.0 vs 2.0 tests, tests of what is required to reach 2GB of VRAM and what happens when that limit is hit, etc. So far in BF3 the results are pretty bad news once the memory reaches 2048MB! (Although it takes quite a bit to surpass the 2GB VRAM amount even at extremely high resolution and settings.) More to follow...

I love this new nVidia Surround. It keeps the desktop taskbar only on the center monitor, and when I maximize windows they only maximize on the center screen. Awesome features! With the simple registry edit, I've got all of the cards running at PCI-E 3.0. In the BIOS I can switch between 1.0/2.0/3.0 at will, so this will make for some nice tests.

Who wants to bet there will be appreciable differences on my setup? ;)

This should be interesting :) BTW, AMD also has the taskbar setting; however, NV has the maximize-to-one-screen setting.
 
Well, I tried the registry fix to get my GTX 680's running at PCI-E 3.0 with my X79 MB, and in Surround mode Windows fails to boot. So the article was right that there can be issues using that registry key.

So hopefully nVidia properly enables 3.0, or I will have to go back to the launch drivers, which I think were 300.83?
 
Well, the results of my PCI Express 2.0 versus 3.0 on my 4-way SLI GTX 680 FW900 Surround setup are in. The results are so incredible I had to start the tests over from scratch and run them multiple times for confirmation! :eek:

Test setup:

3960X @ 5.0 GHz (temp slow speed)
Asus Rampage IV Extreme with PCI-E slots running 16x/8x/8x/8x
(4) EVGA GTX 680's running 1191MHz core, 3402 MHz Memory
nVidia Driver 301.10 with the PCI-E 3.0 registry adjustment turned on and off for each applicable test
GPU-Z 0.6.0

SANY0003-12.jpg



After PCI-E settings changed, confirmed with GPU-Z:

PCI-E20.gif


PCI-E30.gif



All settings in the nVidia control panel, in-game, in the benchmark, and in EVGA Precision are UNTOUCHED between benchmark runs. The only setting adjusted is PCI-E 2.0 to 3.0, back and forth for confirmation (with reboots, obviously, for the registry edit).


PCI-ETests.jpg



I kid you not, that is how much PCI-E 2.0 running at 16x/8x/8x/8x versus PCI-E 3.0 bottlenecks BF3 and Heaven 2.5 at these resolutions. I attribute this to the massive bandwidth being transferred over the PCI-E bus. We are talking 4-way SLI at up to 10 megapixels in alternate frame rendering. Entire frames at high FPS are being swapped, and PCI-E 2.0 falls on its face.

The interesting part was that while running PCI-E 2.0, the GPU utilization dropped way down, as would typically be seen if you were CPU limited. In this instance I am not CPU limited, nor GPU limited. We are really at a point now where you can be PCI-E limited unless you go PCI-E 3.0 8x (equivalent to 16x PCI-E 2.0) or faster on all GPU's in the system. GPU utilization dropped down into the ~50% range due to PCI-E 2.0 choking the cards to death. As soon as I enabled PCI-E 3.0, the GPU utilization skyrocketed to 95+% on all cards. I was going to run more benchmarks and games, but the results are such blow-outs it seems pretty pointless to do any more. This may interest those out there running these new PCI-E 3.0 GPU's who think they are CPU limited (below 95% GPU utilization) yet might actually have PCI-E bandwidth issues.

Down to the nitty-gritty: if you run a single GPU, yes, a single 16x PCI-E 2.0 slot will be fine. When you start to run multiple GPU's and/or run these new cards at 8x speed, especially in Surround/Eyefinity, make sure to get PCI-E 3.0. ;)
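
The rough per-direction bus numbers behind that argument look something like this (approximate figures, and actual SLI/AFR transfer behavior is more involved than a simple frame-copy model):

```python
# Approximate usable PCIe bandwidth per direction after encoding overhead:
# gen 2 ~= 0.5 GB/s per lane (5 GT/s, 8b/10b),
# gen 3 ~= 0.985 GB/s per lane (8 GT/s, 128b/130b).
def pcie_gbs(lanes, gen):
    per_lane = {2: 0.5, 3: 0.985}
    return lanes * per_lane[gen]

print(pcie_gbs(8, 2))   # ~4.0 GB/s -- the x8 slots at PCI-E 2.0
print(pcie_gbs(8, 3))   # ~7.9 GB/s -- the same slots at PCI-E 3.0
print(pcie_gbs(16, 2))  # ~8.0 GB/s -- why 3.0 x8 roughly equals 2.0 x16

# A finished ~10 MP frame at 32-bit color is roughly 38 MB; in 4-way AFR those
# frames have to reach the display-owning card, so at high frame rates the
# 2.0 x8 links can plausibly become the choke point.
```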
 
Holy shit... I would have said otherwise.

But... WOW, that is a huge difference. Just curious, did you get these kinds of results with the 7970 going from PCI-E 2.0 to 3.0?

I cannot believe the difference. I would almost call bullshit, but the proof is right there... I'm blown away!
 
While the results are surely there, and I applaud you and your wallet for being able to prove such a feat, I would like to see the difference in 2-way SLI - something MUCH, MUCH, MUCH more attainable for an enthusiast than spending $2k on GPU's alone.
 