Vega Rumors

Tesla can't crunch the data quickly if the data set can't fit in memory... that's why the oil/gas industry was reporting 10x faster speeds with AMD solutions last time I looked. That's the whole reason AMD has the SSG market underway.


P.S. I hope next generation they sell expensive-as-fuck mining-only cards direct from Nvidia/AMD, and disable the compute functionality miners need on the gaming cards.
This shit is getting ridiculous.


Tesla has versions that have SSDs too; they just didn't put it all on a single card.

Nah mining will still keep going, doesn't matter for them man, anyone that buys a card is still money for them.
 
Tesla has versions that have SSDs too; they just didn't put it all on a single card.

Nah mining will still keep going, doesn't matter for them man, anyone that buys a card is still money for them.

Ahh wasn't aware.
But their gaming audience is getting pissed off. They need to separate the two and gouge you miners out directly, so the retailers don't get the cut.
 
Yeah, and it's wrong lol.

Problem is, I think people that listen to those types of articles really should take a step back and see what Vega really is (GCN vs Maxwell and Pascal); the efficiency difference is staggering, and it all comes from the control silicon. Going to separate scheduler blocks like GCN would kill everything that is advantageous to nV right now: its occupancy and throughput give it a lot of advantages that GCN just can't compete with, and then add the power efficiency on top of that. They won't go towards a "GCN"-like architecture; they would lose 50% of their advantage.
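Since occupancy and throughput get thrown around a lot in these arguments, here's a minimal sketch (the saxpy kernel is just a made-up example, nothing from the blog) of how you can actually ask the CUDA runtime about occupancy; it reports how many blocks of a given size fit on an SM given the kernel's register and shared-memory footprint:

#include <cstdio>
#include <cuda_runtime.h>

// Made-up kernel just to have something to query.
__global__ void saxpy(float a, const float *x, float *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    int blockSize = 256;       // threads per block
    int blocksPerSM = 0;

    // How many resident blocks of this size fit on one SM, given the kernel's
    // register and shared-memory usage.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, saxpy, blockSize, 0);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    float occupancy = (blocksPerSM * blockSize) / (float)prop.maxThreadsPerMultiProcessor;
    printf("blocks/SM: %d, theoretical occupancy: %.0f%%\n", blocksPerSM, occupancy * 100.0f);
    return 0;
}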

Can anyone link this article please? There was no article; it was forum members posting about it after reading nV's blog, which was really generalized.

It's like taking what AMD stated about Vega a year ago and saying it's the best thing since sliced bread, but turning it around since it's an nV blog.

https://devblogs.nvidia.com/parallelforall/inside-volta/



This small quote should be looked at for what the new schedulers are capable of. GCN can't do this. No GPU can do this right now.



Prior to that quote, in the Pascal SIMT section, they talk about convergence and divergence. GCN also has the problem, though not as severe as Pascal.
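For anyone who hasn't seen divergence in practice, a minimal sketch (hypothetical kernel, not from the blog) of what it looks like in code:

// Under pre-Volta SIMT a warp has a single program counter, so when its 32 threads
// split across this if/else the two paths run one after the other, with the threads
// on the inactive path masked off until the branch reconverges.
__global__ void divergent(const int *in, int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] % 2 == 0)       // threads whose element is even take this path...
        out[i] = in[i] * 2;
    else                      // ...the rest take this one, executed serially after it
        out[i] = in[i] + 1;
}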

Now, if a person actually understood the blog they would know Volta is quite a bit different than GCN.

And it was summed up in that blog prior to all this too


NO GPU DOES THIS RIGHT NOW.

Whoever started this rumor of it being GCN-like just doesn't read the pertinent things.

To do that, the scheduling block (which is the GigaThread engine for nV) has to be way different than what they have now, and way different from anything else out there right now too.

GCN, like all other current unified pipelines, schedules its threads as a group; this is why understanding utilization and occupancy is so damn important for GCN, Maxwell, Pascal, etc. With Volta that will change to a large degree. I don't think it will be removed entirely, but it will matter much less. There were HPC devs talking about Volta who stated how different the programming model is, and then you had nV employees saying this is a much different architecture than before.
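To make the programming-model point concrete, here's a minimal sketch (a hypothetical kernel, not something from the blog) of the kind of per-thread locking Volta's independent thread scheduling is supposed to make safe; on current lock-step SIMT hardware this pattern can hang, because the warp-mates spinning on the lock can starve the thread that holds it:

// A fine-grained spin lock taken by individual threads: with one program counter
// per warp, the lanes that lost the CAS keep re-executing the failing path and the
// winner never gets to run its critical section; Volta's per-thread program
// counters and progress guarantee let the holder run, release, and the rest
// eventually acquire the lock.
__device__ int lock = 0;

__global__ void per_thread_lock(int *counter)
{
    bool done = false;
    while (!done) {
        if (atomicCAS(&lock, 0, 1) == 0) {   // try to take the lock
            *counter += 1;                   // critical section
            __threadfence();                 // make the update visible before release
            atomicExch(&lock, 0);            // release
            done = true;
        }
    }
}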

Now we can see there are pretty big differences: the shader array at a high level (threads) is quite different. That tells us a bit about how the lower levels, like instructions, will be different. By getting finer granularity at a thread level, all those problems with the static partitioning and dynamic partitioning of Maxwell and Pascal (respectively) will all disappear.
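The flip side, if I'm reading the blog right, is that code which used to assume implicit warp-synchronous, lock-step execution now has to name the participating lanes explicitly. A minimal sketch of a warp sum written with the *_sync intrinsics (my own example, not theirs):

// Because threads in a warp are no longer guaranteed to run in lock-step, the
// shuffle names its participants with an explicit mask instead of relying on
// implicit warp-synchronous behavior.
__inline__ __device__ int warpReduceSum(int val)
{
    const unsigned FULL_MASK = 0xffffffffu;              // all 32 lanes take part
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(FULL_MASK, val, offset);
    return val;                                          // lane 0 ends up with the warp total
}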

Where AMD decided to do their instruction assignment and thread assignment at a block level, nV with Volta is doing it at an ALU level: much finer granularity than what we see now, even much finer than in GCN. GCN is stuck at a block level (one CU block).

Is this understandable?

Now, for people that call others sheep and a marketing forum and whatnot, but don't back up a single thing they say, or read things wrong, that is kinda fucked up, don't you think?
Come on, Nvidia GCN will be the version that is done right ;).
 
Ahh wasn't aware.
But their gaming audience is getting pissed off. They need to separate the two and gouge you miners out directly, so the retailers don't get the cut.
Or both just make more damn cards to saturate the market. I think Nvidia will catch up to the demand; getting all the other supporting parts is probably the big issue. AMD? Maybe they can increase RX 5xx series production, not sure about the Vega series.
 
Or both just make more damn cards to saturate the market. I think Nvidia will catch up to the demand; getting all the other supporting parts is probably the big issue. AMD? Maybe they can increase RX 5xx series production, not sure about the Vega series.

Saturation of the market kills mining. Nvidia is having the same problem - 1070s are still overpriced - they're only 50 dollars less than a damn 1080 here even with the exchange rate, and I've heard similar overseas.
But now the V56 is the new mining hotness, and we both know they won't make many of them, because it's not worth it to them with their tiny margins, low MSRP, and the gouging they make no money off of. Sucks being an AMD-leaning fan with mining shitting the bed.

Only solution they have is gimping drivers and separating the products, or AMD high end GPUs will become a thing of the past for most gamers. If you want to buy Nvidia you're pretty much stuck looking at 1080 or Ti. Everything below that is shitty value if not mining.
 
Saturation of the market kills mining. Nvidia is having the same problem - 1070s are still overpriced - they're only 50 dollars less than a damn 1080 here even with the exchange rate, and I've heard similar overseas.
But now the V56 is the new mining hotness, and we both know they won't make many of them, because it's not worth it to them with their tiny margins, low MSRP, and the gouging they make no money off of. Sucks being an AMD-leaning fan with mining shitting the bed.

Only solution they have is gimping drivers and separating the products, or AMD high end GPUs will become a thing of the past for most gamers. If you want to buy Nvidia you're pretty much stuck looking at 1080 or Ti. Everything below that is shitty value if not mining.
Sad, but it looks like the 1080 Ti is the best bang per buck, except a number of 1060s have come down in price.
 
Sad, but it looks like the 1080 Ti is the best bang per buck, except a number of 1060s have come down in price.

Great point, I also noticed the same here looking just now: only a few cheaper 1060s, and suddenly some lower-cost 580s too. Most are high priced though, so not sure if it's stock/pre-order BS...
The Ti is sadly the best option it appears, bang for buck.


Welcome to 2017, where the highest-end and best-performing consumer-grade card is also the best bang for buck....

Mining must be great for Nvidia/AMD margins on Ti/64, so I'm wondering if they'll even bother to do what I mentioned above.
That said I wouldn't put it past Nvidia to try, they have the cash spare to do it.
 
I'm new. A little help would be great. Here's my situation:

I don't want to mine, but I've been thinking about these mining cards. I noticed that some of the algorithms used for mining could be adapted to other purposes. I enjoy a bit of escapism on occasion and I thought it would be possible to use some of these mining devices to create 3 dimensional worlds and play games with them on my computer? Has anyone given any thought to this?




:(

Sigh. Damn miners.
 
I enjoy a bit of escapism on occasion and I thought it would be possible to use some of these mining devices to create 3 dimensional worlds and play games with them on my computer? Has anyone given any thought to this?

Closest thing is probably No Man's Sky... CPU driven though.
 
Come on, Nvidia GCN will be the version that is done right ;).


I can see how he got confused lol, he is thinking scheduling is done all at one level, the instruction level; it is not. Getting truly independent thread scheduling automatically creates instruction-level independence with current SIMD architectures. Now, this will increase transistor counts like crazy. More cache, more registers, all that good stuff will be needed.

Now, at an instruction level, Volta will be able to issue instructions to any of the ALUs it has from any other ALU. GCN doesn't function that way; data can only be shared by neighboring CUs. Maxwell/Pascal could only share data within the SMX without any penalties; it could share data across the entire array, but the latency penalty is quite large.
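To put that sharing difference in code terms, here's a minimal sketch (hypothetical kernel) of the cheap path, on-chip shared memory within one block resident on an SM, versus the expensive path through global memory, which is the only way the rest of the array sees the data:

// Launch with blockDim.x == 256.
__global__ void tile_sum(const float *in, float *block_sums, int n)
{
    __shared__ float tile[256];                      // per-block on-chip scratchpad

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;      // stage the data on-chip
    __syncthreads();

    if (threadIdx.x == 0) {
        float sum = 0.0f;
        for (int t = 0; t < blockDim.x; ++t)         // cheap: stays inside the SM
            sum += tile[t];
        block_sums[blockIdx.x] = sum;                // the slow path: global memory is
                                                     // the only way other blocks see it
    }
}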

There is an enormous difference in approach between how AMD currently designed GCN and how nV has been thinking all along. I'm sure Navi will go down this path too, it only makes sense to do this, but only if they are willing to use the transistors to do it. If they want to solely focus on gaming, I don't think AMD will do it. Really depends on if Navi is a large chip or a smaller one.......

Interestingly enough, this approach of thread-level independence will be the foundation of transparent multi-GPU. They still have the problem of memory pooling to solve, which will cost even more transistors on the control chip, or silicon that delegates work to the independent arrays. And this is why I think Navi will go this route too.
 
Ahh wasn't aware.
But their gaming audience is getting pissed off. They need to separate the two and gouge you miners out directly, so the retailers don't get the cut.


Yeah, the unified memory Tesla uses can use a regular SSD to do the same job; now, yeah, there will be some latency increase, but I'm pretty sure that is hidden via the drivers.
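For anyone curious what that looks like on the CUDA side, a minimal sketch of managed (unified) memory, where the driver migrates pages on demand; the size and names here are made up, nothing Tesla/SSG-specific:

#include <cuda_runtime.h>

// One allocation addressable from both CPU and GPU; the driver pages data back
// and forth on demand, which is how an oversized working set gets handled behind
// the programmer's back. The ~1 GiB size is just an arbitrary example.
__global__ void scale(float *data, size_t n, float factor)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    size_t n = (1ull << 30) / sizeof(float);              // ~1 GiB of floats
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));          // visible to CPU and GPU

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;        // first touched on the CPU

    scale<<<(unsigned)((n + 255) / 256), 256>>>(data, n, 2.0f);  // pages fault over to the GPU
    cudaDeviceSynchronize();

    cudaFree(data);
    return 0;
}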

Just can't do it, miners will still get gaming cards lol. The only way to do it is make an entirely separate GPU/card that is great at mining but can't game, and make gaming cards that are great at gaming and can't mine lol.
 
I'm new. A little help would be great. Here's my situation:

I don't want to mine, but I've been thinking about these mining cards. I noticed that some of the algorithms used for mining could be adapted to other purposes. I enjoy a bit of escapism on occasion and I thought it would be possible to use some of these mining devices to create 3 dimensional worlds and play games with them on my computer? Has anyone given any thought to this?




:(

Sigh. Damn miners.


haven't thought of them that way, what algo's do you have in mind?
 
Sad, but it looks like the 1080 Ti is the best bang per buck, except a number of 1060s have come down in price.
Best Buy has the 1080 FE for $479, so you can get a 1080 pretty easily. I see them there all the time. Frys usually has them as well.
 
If I bought a GTX 1080 I would put it under water also, as I have a custom loop. If Pioneer started selling video cards and I bought one, I would figure out how to put it under water. ;)

The water block to me is a part of the case cooling and not video card cost. The only GTX 1080 Ti cards I was considering were the FE and the Aorus that came with a water block already installed for $849. Everything else was fodder.

So, it's all about aesthetics rather than performance.
 
Best Buy has the 1080 FE for $479, so you can get a 1080 pretty easily. I see them there all the time. Frys usually has them as well.
The 1080 FE to a 1080 Ti AIB is about a 35% performance increase, I do believe, so at $479 the 1080 would be slightly better performance/$, but not by much. That is a great price for a 1080 by the way.
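Quick back-of-envelope, taking the 1080 FE as the 1.00 performance baseline and assuming a Ti street price of roughly $700 (my assumption, not a quoted price):

\[
\frac{1.00}{\$479} \approx 2.09\times10^{-3}\ \text{perf/\$ (1080 FE)}
\qquad
\frac{1.35}{\$700} \approx 1.93\times10^{-3}\ \text{perf/\$ (1080 Ti)}
\]

So the $479 1080 comes out roughly 8% ahead on performance per dollar, i.e. slightly better but not by much.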
 
The 1080 FE to a 1080 Ti AIB is about a 35% performance increase, I do believe, so at $479 the 1080 would be slightly better performance/$, but not by much. That is a great price for a 1080 by the way.
I rarely ever see them above $539 USD, in stores mind you.
 
So, it's all about aesthetics rather than performance.
So, not wanting to defend Vega, but if the person already has an adaptive sync monitor there is a reason to get Vega. If they were to buy a 1080 Ti and a G-Sync monitor it would cost double what they paid for the Vega 64.
 
I can see how he got confused lol, he is thinking scheduling is done all at one level, the instruction level; it is not. Getting truly independent thread scheduling automatically creates instruction-level independence with current SIMD architectures. Now, this will increase transistor counts like crazy. More cache, more registers, all that good stuff will be needed.

Now, at an instruction level, Volta will be able to issue instructions to any of the ALUs it has from any other ALU. GCN doesn't function that way; data can only be shared by neighboring CUs. Maxwell/Pascal could only share data within the SMX without any penalties; it could share data across the entire array, but the latency penalty is quite large.

There is an enormous difference in approach between how AMD currently designed GCN and how nV has been thinking all along. I'm sure Navi will go down this path too, it only makes sense to do this, but only if they are willing to use the transistors to do it. If they want to solely focus on gaming, I don't think AMD will do it. Really depends on if Navi is a large chip or a smaller one.......

Interestingly enough, this approach of thread-level independence will be the foundation of transparent multi-GPU. They still have the problem of memory pooling to solve, which will cost even more transistors on the control chip, or silicon that delegates work to the independent arrays. And this is why I think Navi will go this route too.
I do believe RTG's whole plan is to do multiple smaller dies, so they will probably keep GCN, HBM of some form, and probably a rather large interposer like Threadripper or Epyc, and pray for the best while throwing a Hail Mary.
 
I do believe RTG's whole plan is to do multiple smaller dies, so they will probably keep GCN, HBM of some form, and probably a rather large interposer like Threadripper or Epyc, and pray for the best while throwing a Hail Mary.

I think that's already been debunked. Although I would pay good money (thousands and thousands) for a multi-die fastest GPU IF it appears and acts like a single die GPU. Vendor doesn't matter for me. AMD would have to fix their VR performance though.

I refuse to use mGPU as is right now.
 
I do believe RTG's whole plan is to do multiple smaller dies, so they will probably keep GCN, HBM of some form, and probably a rather large interposer like Threadripper or Epyc, and pray for the best while throwing a Hail Mary.


That is the first step towards it; there's still a lot to be done to really achieve it though, at least for gaming. So Navi won't be capable of doing it, Volta won't be capable of doing it; the next generation of chips after those, possibly, we'll have to see.
 
That is the first step towards it; there's still a lot to be done to really achieve it though, at least for gaming. So Navi won't be capable of doing it, Volta won't be capable of doing it; the next generation of chips after those, possibly, we'll have to see.
So we may see a Vega refresh, with Navi coming later in 2019 on a 7nm process. RTG will need the smaller node to incorporate multiple dies on one interposer with HBM, is my thinking. Stacking GPUs like HBM would be interesting if some form of heat removal from the distant future becomes available. Vapor chambers between the dies :).
 
So we may see a Vega refresh, with Navi coming later in 2019 on a 7nm process. RTG will need the smaller node to incorporate multiple dies on one interposer with HBM, is my thinking. Stacking GPUs like HBM would be interesting if some form of heat removal from the distant future becomes available. Vapor chambers between the dies :).


Well, Vega 20 will come but won't be for gaming; Navi at the end of this year, which at this point we really don't have much info on other than the roadmap stuff, and that is all bullet points.....

I don't think stacking GPUs will ever work lol, would be interesting though. HBM is quite expensive; I would expect that cost to be reflected even more with stacked GPUs.
 
So we may see a Vega refresh, with Navi coming later in 2019 on a 7nm process. RTG will need the smaller node to incorporate multiple dies on one interposer with HBM, is my thinking. Stacking GPUs like HBM would be interesting if some form of heat removal from the distant future becomes available. Vapor chambers between the dies :).

Stacked GPUs? Stacked *AMD* GPUs? Lol. Fire and Blood.
 
So, it's all about aesthetics rather than performance.

I care zero about aesthetics as I have all types of fittings, hoses, radiator brands, etc in my loop. That's a common misconception by people that haven't done one before. Have you ever installed water cooling before or at least read what it does? It lowers the temperature of your components as a whole. So say normally an Nvidia card is running at 80c. Under a water loop it would be at 28c - 35c tops idling and on the worst days somewhere in the 40c - 45c range under full load if you have the worst case in the world. Since the card is running cooler it uses less electricity to do the same amount of work. So now you can undervolt the card and maintain max clocks 24/7. Also it is dumping less heat into the case. So your motherboard, hard drives, CPU, etc are running cooler.

Also you can make your build completely silent!

Water cooling can be as cheap or as expensive as you want it to be. I bought a used loop here on [H]ardocp a couple of years ago and have been expanding and changing it ever since.

Those blocks I was linking earlier do look really nice and are aesthetically pleasing to the eye. But to save money you should do something generic like this.
http://koolance.com/index.php?route=product/category&path=29_148_46




Then you can reuse these on every video card that you purchase. So you spend whatever the cost of these one time and that's it. Upgrade time next year? Reuse the same block over and over.

The pragmatist in me says to do this and be done with it. As far as cooling the memory, VRMs, etc on the card, you just slap some generic VGA heat sinks on them with thermal tape. Hell ask EVGA how important thermal tape and thermal pads are. razor1 can tell you about that debacle.



If you want it more aesthetically pleasing you can get copper heat sinks and all types of accessories.
 
I care zero about aesthetics as I have all types of fittings, hoses, radiator brands, etc in my loop. That's a common misconception by people that haven't done one before. Have you ever installed water cooling before or at least read what it does? It lowers the temperature of your components as a whole. So say normally an Nvidia card is running at 80c. Under a water loop it would be at 28c - 35c tops idling and on the worst days somewhere in the 40c - 45c range under full load if you have the worst case in the world. Since the card is running cooler it uses less electricity to do the same amount of work. So now you can undervolt the card and maintain max clocks 24/7. Also it is dumping less heat into the case. So your motherboard, hard drives, CPU, etc are running cooler.

Also you can make your build completely silent!

Water cooling can be as cheap or as expensive as you want it to be. I bought a used loop here on [H]ardocp a couple of years ago and have been expanding and changing it ever since.

Those blocks I was linking earlier do look really nice and are aesthetically pleasing to the eye. But to save money you should do something generic like this.
http://koolance.com/index.php?route=product/category&path=29_148_46




Then you can reuse these on every video card that you purchase. So you spend whatever the cost of these one time and that's it. Upgrade time next year? Reuse the same block over and over.

The pragmatist in me says to do this and be done with it. As far as cooling the memory, VRMs, etc on the card, you just slap some generic VGA heat sinks on them with thermal tape. Hell ask EVGA how important thermal tape and thermal pads are. razor1 can tell you about that debacle.



If you want it more aesthetically pleasing you can get copper heat sinks and all types of accessories.


Personally I don't use water cooling cause I change out my components too fast, but man, all the stuff you just showed in the past few posts would make one hell of a sexy system! Some people like the looks; if I had the time and willingness to do it, I would go all the way and make it into a piece of art :)
 
I read a paper concerning microscopic cooling pipes embedded in the silicon that basically operate passively, because at that scale the fluid can move purely by convection etc. I can't find the paper anymore, but I found quite a few references to this:

https://www.extremetech.com/extreme...dded-water-droplets-could-cool-next-gen-chips
https://www.electronicsweekly.com/news/research-news/direct-water-cooling-for-fpga-die-2016-01/

Another key advantage of liquid cooling: It could be embedded between 3-D–stacked high-power chips. To show this potential, Bakir’s team embedded interconnects into the cooling channels that could be used to link stacked chips.

Edit:

best one so far

https://www.zurich.ibm.com/st/electronicpackaging/cooling.html

Volumetric heat fluxes of up to 3.9 kW cm⁻³ were demonstrated experimentally for a three-tier stack with simple straight microchannels [4]. More sophisticated fluid cavities are under investigation and have the potential to improve heat removal performance. Accordingly, Figure 6a shows a sketch of an interlayer cooling roadmap considering a step-wise introduction of
 
I don't know if they will use this; it just depends on what is cost effective for the performance.

Interesting concepts, but sounds pretty expensive to me.
 
I read a paper concerning microscopic cooling pipes embedded in the silicon that basically operate passively, because at that scale the fluid can move purely by convection etc. I can't find the paper anymore, but I found quite a few references to this:

https://www.extremetech.com/extreme...dded-water-droplets-could-cool-next-gen-chips
https://www.electronicsweekly.com/news/research-news/direct-water-cooling-for-fpga-die-2016-01/



Edit:

best one so far

https://www.zurich.ibm.com/st/electronicpackaging/cooling.html

Sounds like something AMD would maul.
 
I care zero about aesthetics as I have all types of fittings, hoses, radiator brands, etc in my loop. That's a common misconception by people that haven't done one before. Have you ever installed water cooling before or at least read what it does? It lowers the temperature of your components as a whole. So say normally an Nvidia card is running at 80c. Under a water loop it would be at 28c - 35c tops idling and on the worst days somewhere in the 40c - 45c range under full load if you have the worst case in the world. Since the card is running cooler it uses less electricity to do the same amount of work. So now you can undervolt the card and maintain max clocks 24/7. Also it is dumping less heat into the case. So your motherboard, hard drives, CPU, etc are running cooler.

The more you try to explain yourself, the less I understand.

Generally, custom loops are for those who buy top of the line components and want to squeeze out every last bit of performance.

Either that, or they are for those who want to add aesthetics to their PCs.

Your case is clearly not the former (since the Radeon RX Vega 64 could hardly be considered a top-of-the-line component), so it is probably the latter.
 
Well, Vega 20 will come but won't be for gaming; Navi at the end of this year, which at this point we really don't have much info on other than the roadmap stuff, and that is all bullet points.....

I don't think stacking GPUs will ever work lol, would be interesting though. HBM is quite expensive; I would expect that cost to be reflected even more with stacked GPUs.
That won't stop AMD from trying :).
 
The more you try to explain yourself, the less I understand.

Generally, custom loops are for those who buy top of the line components and want to squeeze out every last bit of performance.

Either that, or they are for those who want to add aesthetics to their PCs.

Your case is clearly not the former (since the Radeon RX Vega 64 could hardly be considered a top-of-the-line component), so it is probably the latter.

razor1 Ieldra, can you explain to this person what a water loop is for? It seems that everything I typed and thought I had explained went over their head. I don't know what else to say. I'm at a loss for words.
 
The more you try to explain yourself, the less I understand.

Generally, custom loops are for those who buy top of the line components and want to squeeze out every last bit of performance.

Either that, or they are for those who want to add aesthetics to their PCs.

Your case is clearly not the former (since the Radeon RX Vega 64 could hardly be considered a top-of-the-line component), so it is probably the latter.


Keeping the core/VRAM as cool as possible, especially with something like RX Vega, lets you see its full performance potential. It will beat out a 1080, at the price of the 1080. It will also use less power than what it's rated for in air-cooled situations (still higher than a 1080, but the trade-off is it's at the cost of a 1080).

Cagey likes AMD, and he even states this. Doing what he is doing with water cooling, because he also likes to make his system look awesome, he gets everything he wants. He gets performance at the level he likes, he gets to use less power than a stock RX Vega, and he gets the looks he is looking for. It might cost him a bit more on the water cooling side of things, but he is getting what he wants and the looks he wants; that is more important than anything else :). He will be extremely happy with what he has, since it will perform better than any other RX Vega system out of the box, and he is customizing it to his heart's content.

If he goes with a 1080 Ti he will spend a lot more, 200-300 bucks right off the bat for the card, and I'm not sure what stuff he is using for his water cooling right now, but he might not be able to reuse those components. And he also won't get that look on the water block for the GPU.
 
Keeping the core/VRAM as cool as possible, especially with something like RX Vega, lets you see its full performance potential. It will beat out a 1080, at the price of the 1080. It will also use less power than what it's rated for in air-cooled situations (still higher than a 1080, but the trade-off is it's at the cost of a 1080).

Cagey likes AMD, and he even states this. Doing what he is doing with water cooling, because he also likes to make his system look awesome, he gets everything he wants. He gets performance at the level he likes, he gets to use less power than a stock RX Vega, and he gets the looks he is looking for. It might cost him a bit more on the water cooling side of things, but he is getting what he wants and the looks he wants; that is more important than anything else :). He will be extremely happy with what he has, since it will perform better than any other RX Vega system out of the box, and he is customizing it to his heart's content.

If he goes with a 1080 Ti he will spend a lot more, 200-300 bucks right off the bat for the card, and I'm not sure what stuff he is using for his water cooling right now, but he might not be able to reuse those components. And he also won't get that look on the water block for the GPU.

Also, if he does go with a 1080 Ti and wants adaptive sync, he will have to spend another couple hundred for a G-Sync monitor.
 
_mockingbird Here is a basic water cooling starter kit that costs $137 on Amazon. It is expandable, which means that you can disconnect the fittings and change them how you see fit! You can easily add a video card block like the one I linked earlier to the loop. There are plenty of others by other brands. I just think the Swiftech stuff is really nice.
https://www.amazon.com/Swiftech-H24...=UTF8&qid=1504029985&sr=8-3&keywords=swiftech




Now before you say that it is EXPENSIVE, here is a typical Corsair Hydro Series H110i pump and radiator that is NOT expandable and lots of enthusiasts buy. For example the Intel i9 crowd uses these. It is $124.
https://www.amazon.com/Corsair-Extr...F8&qid=1504030315&sr=1-1&keywords=corsair+110

So for $13 more you can get an EXPANDABLE loop! This is the basics for a water loop. Can you buy more expensive stuff? Sure can! Here are some examples.

Thermaltake $284.
https://www.amazon.com/Thermaltake-...504029895&sr=8-8&keywords=thermaltake+pacific

EKWB starter kit is $254.
https://www.amazon.com/EKWB-EK-KIT-...UTF8&qid=1504029937&sr=8-10&keywords=ekwb+kit

So all you need to do is buy the $5 heat sink kit for your video card and the $70 universal block (which will fit every video card made in the last decade, I bet) and add it to one of the water loop kits that I just linked.

Or you can buy one of those fancy $125 GPU blocks that does the SAME thing and that you have to replace with every new video card you purchase.

In the end you can end up with a completely silent system that runs cooler than an air cooled system. The components will use less electricity also! And no more thermal throttling, or AMD / Nvidia GPU core frequency bouncing all around. It will just sit at max all the time.
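Rough tally using only the prices above, and assuming (my assumption) three GPU upgrades with the universal block carried over each time:

\[
\underbrace{\$70 + \$5}_{\text{universal block + heat sinks, reused}} = \$75
\qquad \text{vs.} \qquad
3 \times \$125 = \$375 \ \text{in full-cover blocks}
\]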

razor1 Thanks for the explanation!
 
The more you try to explain yourself, the less I understand.

Generally, custom loops are for those who buy top of the line components and want to squeeze out every last bit of performance.

Either that, or they are for those who want to add aesthetics to their PCs.

Your case is clearly not the former (since the Radeon RX Vega 64 could hardly be considered a top-of-the-line component), so it is probably the latter.
A GTX 1080 (I do not consider a 1080 mid-tier; it will beat a 980 Ti handily, and to get any better you have to go 1080 Ti) is a top-of-the-line component, and since the liquid-cooled Vega 64 easily beats a GTX 1080, a custom-cooled Vega 64 is certainly top of the line.
 
Don't forget that EKWB also has the all-aluminum Fluid Gaming kits as another affordable option, at $159 for the 240mm radiator or $239 for the one that also includes a Pascal block. Unsure when the aluminum Vega block will be added, but it has been stated as a planned release.
 
razor1 Ieldra, can you explain to this person what a water loop is for? It seems that everything I typed and thought I had explained went over their head. I don't know what else to say. I'm at a loss for words.

Keeping the core/VRAM as cool as possible, especially with something like RX Vega, lets you see its full performance potential. It will beat out a 1080, at the price of the 1080. It will also use less power than what it's rated for in air-cooled situations (still higher than a 1080, but the trade-off is it's at the cost of a 1080).

Cagey likes AMD, and he even states this. Doing what he is doing with water cooling, because he also likes to make his system look awesome, he gets everything he wants. He gets performance at the level he likes, he gets to use less power than a stock RX Vega, and he gets the looks he is looking for. It might cost him a bit more on the water cooling side of things, but he is getting what he wants and the looks he wants; that is more important than anything else :). He will be extremely happy with what he has, since it will perform better than any other RX Vega system out of the box, and he is customizing it to his heart's content.

If he goes with a 1080 Ti he will spend a lot more, 200-300 bucks right off the bat for the card, and I'm not sure what stuff he is using for his water cooling right now, but he might not be able to reuse those components. And he also won't get that look on the water block for the GPU.

What I am saying is that:

cageymaru said that he uses custom loops because he wants his components to run cool and consume less power.

If that's truly the case, why would he buy a Radeon RX Vega 64?

It uses more power and generates more heat than the Geforce GTX 1080/1080 Ti

It seems to contradict his own stated goal of having his components running cooler and using less power.

He could have as easily put a Geforce GTX 1080/1080 Ti in a custom loop.
 