Vega Rumors

What I am saying is that:

cageymaru said that he uses custom loops because he wants his components to run cool and consume less power.

If that's truly the case, why would he buy a Radeon RX Vega 64?

It uses more power and generates more heat than the Geforce GTX 1080/1080 Ti.

He could just as easily have put a Geforce GTX 1080/1080 Ti in a custom loop.

It seems to contradict his own stated goal of having his components running cooler and using less power.

No it doesn't. He wants a Vega specifically, and wants that to run cooler.
 
Don't forget that EKWB also has the all-aluminum Fluid Gaming kits as another affordable option, at $159 for the 240mm radiator or $239 for the one that also includes a Pascal block. Unsure when the aluminum Vega block will be added, but it has been stated as a planned release.
Yup, EK A240G, CPU/GPU water looping on the cheap. The only thing you have to worry about is galvanic corrosion, but it shouldn't be an issue as long as you don't mix water block metals.
 
What I am saying is that:

cageymaru said that he uses custom loops because he wants his components to run cool and consume less power.

If that's truly the case, why would he buy a Radeon RX Vega 64?

It uses more power and generates more heat than the Geforce GTX 1080/1080 Ti.

It seems to contradict his own stated goal of having his components running cooler and using less power.

He could just as easily have put a Geforce GTX 1080/1080 Ti in a custom loop.


He wants Vega to run cooler and use less power. Plus he won't be able to get those water blocks for any nV card that I know of right now.
 
No it doesn't. He wants a Vega specifically, and wants that to run cooler.

So what you are saying is that he made an irrational decision.

He wants Vega to run cooler and use less power. Plus he won't be able to get those water blocks for any nV card that I know of right now.

Geforce GTX 1080 Ti has been released for 5 months.

Geforce GTX 1080 has been released for over a year.

It's hard to believe that he won't be able to get water blocks for those cards.
 
So what you are saying is that he made an irrational decision.



Geforce GTX 1080 Ti has been released for 5 months.

Geforce GTX 1080 has been released for over a year.

It's hard to believe that he won't be able to get water blocks for those cards.


He can get water blocks, just not those specific ones that look like that. It's not an irrational decision; he knows what he is getting lol, and he likes what he is getting, nothing wrong with that. It might be irrational to you, but everyone has their preferences. It's not a bad purchase. If he was getting an RX 580 at GTX 1080 or 1070 prices, yeah, that would be a bad decision lol. He isn't doing that.
 
He can get water blocks, just not those specific ones that look like that.

He said and I quote "I care zero about aesthetics".

He can get a different one that doesn't look like that.

It's not an irrational decision; he knows what he is getting lol, and he likes what he is getting, nothing wrong with that. It might be irrational to you, but everyone has their preferences. It's not a bad purchase. If he was getting an RX 580 at GTX 1080 or 1070 prices, yeah, that would be a bad decision lol. He isn't doing that.

Well, if this decision is not irrational, surely he would be able to explain his decision in a cohesive way.
 
So what you are saying is that he made an irrational decision.


Geforce GTX 1080 Ti has been released for 5 months.

Geforce GTX 1080 has been released for over a year.

It's hard to believe that he won't be able to get water blocks for those cards.

Not really; the Zotac 1080 AMP I have has no water blocks yet, though I hear they are finally making one, and the card has been on the market for quite some time. People get what they want. When you pay his bills, I guess you can buy him what you want; until then you need to relax a bit. It's not like he is stopping you from buying a 1080 Ti.
 
Not really; the Zotac 1080 AMP I have has no water blocks yet, though I hear they are finally making one, and the card has been on the market for quite some time. People get what they want. When you pay his bills, I guess you can buy him what you want; until then you need to relax a bit. It's not like he is stopping you from buying a 1080 Ti.

He can buy what he wants. I never said otherwise.

I just question his rationality.

He actually has; whether you choose to accept it is your business.

What is it?

I can easily explain my line of reasoning.

Compared to the Geforce GTX 1080, the Radeon RX Vega 64 is:

1. Cheaper? X

2. Faster? X

3. Consumes less power? X

Three strikes!
 
So what you are saying is that he made an irrational decision.



Geforce GTX 1080 Ti has been released for 5 months.

Geforce GTX 1080 has been released for over a year.

It's hard to believe that he won't be able to get water blocks for those cards.

Here are water blocks for the GeForce GTX 10 series from EKWB. Plenty of other water block manufacturers make them also.
https://www.ekwb.com/shop/water-blo...idia-geforce/geforce-gtx-10x0-series?limit=36

Koolance in my opinion are the schmexy ones. These things are built like tanks!
http://koolance.com/index.php?route=product/category&path=29_148_46

This GTX 1080 Ti card from Gigabyte has a built in water block and is on Amazon! When I was unsure if I could get a RX Vega 64 for $499, this was my fallback choice of card. It is $849. I don't like spending that much on a video card so... I mean I COULD, but do I really want to? What gaming session is worth $850 to me? Just a personal choice that I made. I would not have felt cheated if I got the Gigabyte as it is scrumptious looking! Thing looks like Halle Berry in a nightgown. Copper is so very pretty! Damn that thing is sexy! :) :) :)

Don't forget that my case flips the video cards 90 degrees also so it would be standing up vertically in the case instead of being horizontal. I think the Gigabyte card is worth every penny.

https://www.amazon.com/Gigabyte-GV-...3&keywords=gigabyte+aorus+geforce+gtx+1080+ti


[Image: Gigabyte Aorus GTX 1080 Ti product photo]
 
As an Amazon Associate, HardForum may earn from qualifying purchases.
He can buy what he wants. I never said otherwise.

I just question his rationality.



What is it?

I can easily explain my line of reasoning.

Compared to the Geforce GTX 1080, the Radeon RX Vega 64 is:

1. Cheaper? X

2. Faster? X

3. Consumes less power? X

Three strikes!

My card was $499 and will be FASTER than a GTX 1080 afterwards. Look at the WC variant reviews of the RX Vega. razor1 already explained this to you multiple times.

All I can say is accept my reasoning or reject it. I'm not trying to make you buy a RX Vega 64 like you are trying to guilt trip me into buying a GTX 1080. I will just assume that you don't care what my reasoning is and are against whatever reasoning that I present to you.

Which is cool as everyone is different anyways. :)
 
My card was $499 and will be FASTER than a GTX 1080 afterwards. Look at the WC variant reviews of the RX Vega. razor1 already explained this to you multiple times.

All I can say is accept my reasoning or reject it. I'm not trying to make you buy a RX Vega 64 like you are trying to guilt trip me into buying a GTX 1080. I will just assume that you don't care what my reasoning is and are against whatever reasoning that I present to you.

Which is cool as everyone is different anyways. :)

Sure, but is it faster than a liquid cooled GeForce GTX 1080?
 
My card was $499 and will be FASTER than a GTX 1080 afterwards. Look at the WC variant reviews of the RX Vega. razor1 already explained this to you multiple times.

All I can say is accept my reasoning or reject it. I'm not trying to make you buy a RX Vega 64 like you are trying to guilt trip me into buying a GTX 1080. I will just assume that you don't care what my reasoning is and are against whatever reasoning that I present to you.

Which is cool as everyone is different anyways. :)

Sounds like a well-thought-out, prudent decision. Also, don't you have a FreeSync monitor?

$499 for a Vega 64 to me is reasonable; while consuming more power, it does do better in general in the DX12 and Vulkan APIs as far as I can tell. Adding better cooling will just enhance the card overall. Looking forward to seeing your mods.
 
Sure, but is it faster than a liquid cooled GeForce GTX 1080?


Liquid cooled or not, Pascal hits a wall at 2100MHz; that is pretty much its max performance. Just like Cagy will get max performance out of his Vega. I think they will end up damn close to each other, and the extra power draw will actually be a wash too. The 1080 at those clocks draws a lot of power too, maybe not as much as the Vega, but it ends up being not that much different. The 1080 will end up over 300 watts (wouldn't be surprised if it hits 350 watts), and the RX Vega will end up 450ish watts, so when going crazy like that it doesn't really matter what card ya get.
 
Here are water blocks for the GeForce GTX 10 series from EKWB. Plenty of other water block manufacturers make them also.
https://www.ekwb.com/shop/water-blo...idia-geforce/geforce-gtx-10x0-series?limit=36

Koolance in my opinion are the schmexy ones. These things are built like tanks!
http://koolance.com/index.php?route=product/category&path=29_148_46

This GTX 1080 Ti card from Gigabyte has a built in water block and is on Amazon! When I was unsure if I could get a RX Vega 64 for $499, this was my fallback choice of card. It is $849. I don't like spending that much on a video card so... I mean I COULD, but do I really want to? What gaming session is worth $850 to me? Just a personal choice that I made. I would not have felt cheated if I got the Gigabyte as it is scrumptious looking! Thing looks like Halle Berry in a nightgown. Copper is so very pretty! Damn that thing is sexy! :) :) :)

Don't forget that my case flips the video cards 90 degrees also so it would be standing up vertically in the case instead of being horizontal. I think the Gigabyte card is worth every penny.

https://www.amazon.com/Gigabyte-GV-...3&keywords=gigabyte+aorus+geforce+gtx+1080+ti


[Image: Gigabyte Aorus GTX 1080 Ti product photo]

Is it me or is that a two slot water cooling solution? Damn......
 
Sounds like a well-thought-out, prudent decision. Also, don't you have a FreeSync monitor?

$499 for a Vega 64 to me is reasonable; while consuming more power, it does do better in general in the DX12 and Vulkan APIs as far as I can tell. Adding better cooling will just enhance the card overall. Looking forward to seeing your mods.

Going to guilt trip my friend frankmansal into selling me his 1440p IPS FreeSync 34" LG Ultrawide. ;) The reason that I had to wait for mine was that he was messaging me to see if they were on sale yet and I missed the 15-second window! :) :) :) Then he can get one of those GTX 1080s and a G-SYNC Ultrawide, since he was at work for the Vega 64 launch and they sold out in 15 seconds. ;) :) :) Right now I have a 50" Samsung 6300 series 4K TV with the fake HDR, a Samsung 120Hz 27" monitor, and an Oculus Rift.

Honestly I really want to see the specs on FreeSync 2 monitors that are 40"+ that are potentially in the pipeline. Then frankmansal can keep that puny 34" baby monitor! :)
 
Liquid cooled or not, Pascal hits a wall at 2100MHz; that is pretty much its max performance. Just like Cagy will get max performance out of his Vega. I think they will end up damn close to each other, and the extra power draw will actually be a wash too. The 1080 at those clocks draws a lot of power too, maybe not as much as the Vega, but it ends up being not that much different. The 1080 will end up over 300 watts (wouldn't be surprised if it hits 350 watts), and the RX Vega will end up 450ish watts, so when going crazy like that it doesn't really matter what card ya get.
As a 1080 user, your FPS-to-MHz correlation becomes significantly small past 1900MHz. It is like 1 or 2 fps for every 50MHz past that. I can get my FE to 2050, and with a bit of fiddling with undervolting and a high fan profile I can hold it, but the actual performance difference from a 2050 overclock and a 1900MHz boost clock is roughly 10 fps.
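The diminishing-returns point above can be sketched with rough arithmetic. The 1900MHz baseline and the "1 or 2 fps per 50MHz" figure come from the post; the 1.5 fps midpoint and the function itself are assumptions purely for illustration:

```python
# Rough sketch of the diminishing fps-per-MHz scaling described above.
# Assumption: ~1.5 fps gained per 50 MHz past 1900 MHz (midpoint of the
# quoted "1 or 2 fps" figure). Not a measurement, just the quoted trend.
BASELINE_CLOCK_MHZ = 1900
FPS_PER_50MHZ = 1.5

def estimated_fps_gain(clock_mhz):
    """Estimated fps gained over the 1900 MHz boost-clock baseline."""
    extra_mhz = max(0, clock_mhz - BASELINE_CLOCK_MHZ)
    return (extra_mhz / 50) * FPS_PER_50MHZ

# A 2050 MHz overclock buys only a handful of fps over stock boost:
print(estimated_fps_gain(2050))  # 4.5
```

By that per-50MHz figure, 150MHz of extra clock lands around 3-6 fps, which is the diminishing-returns point being made.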
 
Going to guilt trip my friend frankmansal into selling me his 1440p IPS FreeSync 34" LG Ultrawide. ;) The reason that I had to wait for mine was that he was messaging me to see if they were on sale yet and I missed the 15-second window! :) :) :) Then he can get one of those GTX 1080s and a G-SYNC Ultrawide, since he was at work for the Vega 64 launch and they sold out in 15 seconds. ;) :) :) Right now I have a 50" Samsung 6300 series 4K TV with the fake HDR, a Samsung 120Hz 27" monitor, and an Oculus Rift.

Honestly I really want to see the specs on FreeSync 2 monitors that are 40"+ that are potentially in the pipeline. Then frankmansal can keep that puny 34" baby monitor! :)
lol, those 34" Ultrawides are rather nice, I have to say, and not so small compared to the rather big 13" monitors of the 80's :D. Anyways, that monitor looks sweet! 75Hz shouldn't be that bad; mine is 60Hz, no FreeSync or G-Sync, and I love it.
 
Spoiler: Jay loves the extra performance but doesn't recommend buying Vega 64 at the current price-gouged pricing. At $499, he says, Vega 64 + a water block is well worth it.


 
Is it me or is that a two slot water cooling solution? Damn......
Looks like it. I'm not sure how much of that was just to match up with the IO, which keeps a DVI connector.

e: The side view has it looking closer to single slot in terms of the block. The IO still takes 2 slots though.
 
SM 6.0 is also a feature set of DX12; all DX12 cards will be capable of doing it. Again, don't think of tiers as DX versions, it will get you confused. Don't know why you are mixing the two together, unless you don't know the difference.
Tiers will be separate and related to feature levels. Vega and Volta will probably end up as a DX12.2 feature level given the capabilities present in Vega and likely Volta.

Are you making this up as you go along? That is a total flip-flop from what ya said before; you said the drivers weren't ready for Vega, so now they don't make a difference because why?
Compute doesn't have the black box that is rasterization. Tiled rasterization and binning don't directly affect compute shaders. Without that black box there is little room for drivers to increase efficiency beyond enabling new features and paths. The compiler is what could make a difference and in most cases is fully functional at launch. That's where much of Maxwell's efficiency came from with the register file cache also enabling higher clocks. The status of Vega here is still unknown.
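The register-file-cache idea referenced above can be illustrated with a toy model. This is a sketch only: the LRU policy and 6-entry size are assumptions for illustration, not the actual Maxwell parameters. The point is just that a tiny operand cache in front of the big main register file absorbs re-reads of recently used registers, which is where the claimed energy saving comes from:

```python
# Toy model of a register-file cache (RFC): a small per-warp operand cache
# that services recently used values, cutting accesses to the large main
# register file. Policy (LRU) and size (6 entries) are illustrative only.
from collections import OrderedDict

def count_rf_reads(operand_trace, rfc_entries=6):
    """Count main-register-file reads left after a small LRU operand cache."""
    rfc = OrderedDict()
    rf_reads = 0
    for reg in operand_trace:
        if reg in rfc:
            rfc.move_to_end(reg)      # hit: served from the small cache
        else:
            rf_reads += 1             # miss: read the main register file
            rfc[reg] = True
            if len(rfc) > rfc_entries:
                rfc.popitem(last=False)  # evict least recently used
    return rf_reads

# 8 operand reads, but only 4 touch the main register file:
trace = ["r1", "r2", "r1", "r3", "r2", "r1", "r4", "r1"]
print(count_rf_reads(trace))  # 4
```

In a real GPU the compiler decides which values live in the cache, which is why this is described as a compiler-driven feature.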

LOL more BS right? nV has been using scaler for how long?
Yet they've taken how long to get bindless resources enabled by using a SIMD to do the addressing? Assuming you mean scalar and not scaler here. I'm sure those separate INT pipelines in Volta and Vega are just for show anyways.

How much you want to bet GP 104 won't use GDDR6? And at this point I can 100% say they don't need to go to 7nm for Volta, not the gaming versions, nor the professional versions with HBM.
As GP104 is already released with 5X, I won't be taking that bet. For GV104 it might have allowed a release sooner, but the DRAM market is a whole other debate right now. 7nm mathematically accounts for much of Volta's apparent gains. That and die size in the case of V100. Adding in the hardware schedulers of GCN will take some space.

For laptop and mobile they need HBM, but ceding that market to APUs is probably wise. Same reason Intel already has around 70% of the graphics market. AMD I'd expect is planning on significant gains there.

Why don't you take your analysis of the average consumer elsewhere? It's bad enough having to read your bullshit predictions and boast about your impartiality day after day.

Whats the latest? Volta is copying Vega? Okay. I guess these are your coping mechanisms.

There's nothing remotely resembling ACEs anywhere in the Volta information that has been released, but I guess when your level of understanding is low enough, all of these fairly technical subtleties are lost on you and it all seems like the same thing.
  • Volta Multi-Process Service: Volta Multi-Process Service (MPS) is a new feature of the Volta GV100 architecture providing hardware acceleration of critical components of the CUDA MPS server, enabling improved performance, isolation, and better quality of service (QoS) for multiple compute applications sharing the GPU. Volta MPS also triples the maximum number of MPS clients, from 16 on Pascal to 48 on Volta.
Except, I suppose, the part of Volta that is identical to AMD's ACEs. It doesn't seem to get covered much, but then advertising a feature that enables async compute similar to GCN would be problematic to market. With Pascal's async compute support, developers have to program around it.
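Stripped of the vendor argument, the idea being fought over is hardware-arbitrated scheduling of independent work queues onto shared execution units, rather than serializing them. A toy round-robin model of that idea (not a model of any real GPU; the queue names are made up):

```python
# Toy model of the scheduling idea under debate (ACE-like queues, or MPS
# clients): multiple independent submission queues whose work is
# interleaved onto shared execution units instead of run back to back.
# Purely illustrative -- says nothing about how any real GPU arbitrates.
from collections import deque

def schedule(queues):
    """Round-robin across per-client queues; returns the execution order."""
    order = []
    queues = [deque(q) for q in queues]
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
    return order

graphics = ["draw0", "draw1", "draw2"]
compute = ["phys0", "phys1"]
print(schedule([graphics, compute]))
# ['draw0', 'phys0', 'draw1', 'phys1', 'draw2']
```

The whole dispute is over whether this arbitration happens in dedicated hardware front ends or is massaged by the driver and compiler ahead of time.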

You're just salty all my predictions are looking spot on, if not conservative, at this point. And for the average consumer, yeah they have been screwed, but that is the goal of any salesman. If a consumer needed something they'd buy it.

Actually there was an article a few months back saying Volta was following GCN in how it operates. It alluded to an ACE-like setup but stopped short of in-depth detail. I think it was following some event where this information was based on Nvidia's presentation, maybe.
https://devblogs.nvidia.com/parallelforall/inside-volta/

It showed up, but it's difficult to market a "new" feature that enables async compute when your prior generations supposedly don't need it.

Tesla has versions that have SSDs too; they just didn't put it on a single card.
Yet the oil and gas industry went to AMD to make an SSG? RED and cinema studios are looking at $7k SSGs to replace huge workstations with multiple Titan cards. Odd for Nvidia to cede a segment of a high-margin market to a technology they supposedly have but never deployed.

Yeah, the unified memory Tesla uses can use a regular SSD to do the same job; now yeah, there will be some latency increase, but I'm pretty sure that is hidden via drivers.
If that were the case why didn't they do it? The larger issue is that unified memory ideally ties into a host's IOMMU like NVLink on IBM systems. Avoid bothering the CPU for memory management. I'm not a legal expert, but since the ability isn't advertised with Intel or AMD, I'd assume an X86 license is required. AMD managed that with HBCC over PCIe according to documented features, surely Nvidia could do the same. I haven't seen any evidence it won't work with Vega on Intel systems at least.
 
Destiny 2 numbers are out, seems AMD has some work to do:

[Charts: Destiny 2 benchmark results at 1080p, 1440p, and 2160p]
What were they using to capture the frames? I can't get RivaTuner to work with Destiny or really any FPS counter.

EDIT: I was able to use the GeForce Experience built-in FPS counter; I could get as high as 160, but averages seemed to be 120-130, using a GTX 1080 on highest settings and SMAA.
 
Tiers will be separate and related to feature levels. Vega and Volta will probably end up as a DX12.2 feature level given the capabilities present in Vega and likely Volta.

Features designate DX versions; all DX12 cards must have the same features. Tiers are not part of that. SM 6.0 is part of DX, not part of tiers. Is that easy enough to understand?

Compute doesn't have the black box that is rasterization. Tiled rasterization and binning don't directly affect compute shaders. Without that black box there is little room for drivers to increase efficiency beyond enabling new features and paths. The compiler is what could make a difference and in most cases is fully functional at launch. That's where much of Maxwell's efficiency came from with the register file cache also enabling higher clocks. The status of Vega here is still unknown.

LOL, really man? Clocks, that is all ya got? WTF, does GCN not scale with more units like Maxwell, Pascal, or even Kepler? How the fuck do you explain that with what ya just stated? Doesn't add up, man.

Yet they've taken how long to get bindless resources enabled by using a SIMD to do the addressing? Assuming you mean scalar and not scaler here. I'm sure those separate INT pipelines in Volta and Vega are just for show anyways.

Volta won't have separate pipelines, hence why Pascal has already gone to a mixed-precision pipeline. Not that far of a stretch to assume that will be more robust with Volta.
As GP104 is already released with 5X, I won't be taking that bet. For GV104 it might have allowed a release sooner, but the DRAM market is a whole other debate right now. 7nm mathematically accounts for much of Volta's apparent gains. That and die size in the case of V100. Adding in the hardware schedulers of GCN will take some space.

GV104 is a bigger chip; it doesn't need to use GDDR6 to get more bandwidth ;) GDDR6 most likely will only be used in the enthusiast cards of Volta. A large bus will suffice, and my estimate is it will end up around 380-420mm2, which will put it easily in the range of a 384-bit bus. It would be cheaper for nV to use GDDR5X than the newly minted GDDR6 VRAM, and by doing so it will give them higher margins.

For laptop and mobile they need HBM, but ceding that market to APUs is probably wise. Same reason Intel already has around 70% of the graphics market. AMD I'd expect is planning on significant gains there.

Let them do that in Desktop segment first before talking about notebook.........:whistle: Still waiting on those magical drivers.....


Except, I suppose, the part of Volta that is identical to AMD's ACEs. It doesn't seem to get covered much, but then advertising a feature that enables async compute similar to GCN would be problematic to market. With Pascal's async compute support, developers have to program around it.

Oh, they gave us the block diagrams of Volta, SM units and all; go talk about that over in the nV Volta thread if you like. As I said, you didn't get the Hot Chips conference slides......

Go here and post about Volta

https://hardforum.com/threads/nvidia-gifts-first-v100-cards-volta-is-in-the-wild.1940465/page-2


You're just salty all my predictions are looking spot on, if not conservative, at this point. And for the average consumer, yeah they have been screwed, but that is the goal of any salesman. If a consumer needed something they'd buy it.


LOL, wow, does anyone want to respond to this or point to his predictions? I don't even know how to respond to that; it has been worse than throwing darts blindfolded.


https://devblogs.nvidia.com/parallelforall/inside-volta/

It showed up, but it's difficult to market a "new" feature that enables async compute when your prior generations supposedly don't need it.

Still parroting a dead theory, I see? They didn't even mention anything about async compute; not only that, if that blog reads the way it does, it shouldn't even need any special guidance via programming like GCN or Pascal does when doing async compute.

Yet the oil and gas industry went to AMD to make an SSG? RED and cinema studios are looking at $7k SSGs to replace huge workstations with multiple Titan cards. Odd for Nvidia to cede a segment of a high-margin market to a technology they supposedly have but never deployed.

Right watch what happens when AMD gets nothing in the professional market in the next 4 years, what will you say then?

Oh, forgot: the SSG is just an M.2 connection with global memory. Tesla does the same thing, it just costs more, but it also has more horsepower than a single AMD card, so if you think the oil and gas industry went to AMD to make what you stated, you are incorrect. AMD created it as a cheap alternative, and it will work for smaller companies, maybe; even that is tenuous. All depends on how much horsepower they need.

If that were the case why didn't they do it? The larger issue is that unified memory ideally ties into a host's IOMMU like NVLink on IBM systems. Avoid bothering the CPU for memory management. I'm not a legal expert, but since the ability isn't advertised with Intel or AMD, I'd assume an X86 license is required. AMD managed that with HBCC over PCIe according to documented features, surely Nvidia could do the same. I haven't seen any evidence it won't work with Vega on Intel systems at least.


You aren't a legal expert, nor are you being accurate there. You assumed wrong again........
 
For the love of (anti)christ Anarchist4000, Volta MPS is not equivalent to ACEs lol. This is ridiculous. You really don't have an inkling of a clue what you're talking about, and the more that sinks in the more awestruck I am by your persistence.


Oh no now you have done it, had to point out MPS...........

That boggled my mind: multiple applications sharing resources = ACEs and async compute, WOW!
 
Features designate DX versions; all DX12 cards must have the same features. Tiers are not part of that. SM 6.0 is part of DX, not part of tiers. Is that easy enough to understand?
Feature levels designate which tiers are required for a level. Or was Nvidia's support for 12.1 meaningless now? SM6, or at least some instructions within, may fall to different feature levels or at the very least have different performance impacts. GCN has hardware instructions for ballot and wave level operations. Nvidia will be using several instructions to emulate it.
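For context on the wave-op point: an SM6-style ballot packs each lane's boolean into a bitmask, one bit per lane. A Python sketch of what the operation computes, assuming a 64-wide wave as on GCN (this says nothing about how either vendor actually implements it in hardware):

```python
# Sketch of what an SM6 WaveActiveBallot-style op computes: one bit per
# lane, set when that lane's predicate is true. GCN exposes this kind of
# thing as a hardware instruction over its 64-wide wavefront; other
# hardware may need several instructions to build the same mask.
WAVE_SIZE = 64  # GCN wavefront width; NVIDIA warps are 32 lanes

def wave_ballot(predicates):
    """Pack per-lane booleans into a ballot bitmask."""
    assert len(predicates) == WAVE_SIZE
    mask = 0
    for lane, p in enumerate(predicates):
        if p:
            mask |= 1 << lane
    return mask

lanes = [lane % 2 == 0 for lane in range(WAVE_SIZE)]  # even lanes active
print(hex(wave_ballot(lanes)))  # 0x5555555555555555
```

Every lane then reads the same mask back, which is what makes wave-level voting and compaction cheap when the hardware provides it directly.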

LOL, really man? Clocks, that is all ya got? WTF, does GCN not scale with more units like Maxwell, Pascal, or even Kepler? How the fuck do you explain that with what ya just stated? Doesn't add up, man.
What do clocks have to do with this? The RF caching is where the improvements are derived; higher clocks are a side effect of that feature having a shorter path and using less energy. See that paper Nvidia presented that has been linked before. GCN seems to scale fine given parallel submission with DX12 and async for utilization. In the case of Vega, the register caching is the heart of the power and clock features of Maxwell+. It's also a compiler-driven feature that is the "software scheduling" that Nvidia uses. If not implemented, it could be what's holding back Vega, as I've pointed out before. That would very well mean less power or higher clocks. It could also mean higher utilization of different blocks that are contending for register file access: scalar, texture, vector, LDS, etc. The very lack of that feature would describe the current Vega performance almost perfectly.

Volta won't have separate pipelines, hence why Pascal has already gone to a mixed-precision pipeline.
What I read was that integer and floating point could operate concurrently, with integer providing addressing in the background, much the same way GCN uses the scalar unit at the same time as the VALUs, but with different waves.

Let them do that in Desktop segment first before talking about notebook.........:whistle: Still waiting on those magical drivers.....
Sounds like the big Relive release later this year, but who knows. Intel dominates desktop and mobile, so no reason not to expect AMD to gain ground if Ryzen performs well. Vega at low voltage and clocks seems fine currently, and that's without everything enabled. Not to mention the packed math already being popular there.

Oh, they gave us the block diagrams of Volta, SM units and all; go talk about that over in the nV Volta thread if you like. As I said, you didn't get the Hot Chips conference slides......
Looked over the slides and didn't see much new, beyond obviously Nvidia forgetting about that MPS/ACE feature. Not surprising as it's rather inconvenient to advertise.

Threadsync and the like is an unrelated feature, but it still looks like that dynamic warp formation I described a while back. They don't spell it out, but that doesn't matter much to the programmers.

Still parroting a dead theory, I see? They didn't even mention anything about async compute; not only that, if that blog reads the way it does, it shouldn't even need any special guidance via programming like GCN or Pascal does when doing async compute.
Well, they wouldn't, as it would look bad. Just look at all the effort they've spent trying to dispel that "myth" with devs designing around it on Nvidia hardware.

Async on GCN just works, so there is no real guidance, not beyond avoiding circular dependencies, which is rather obvious in any programming. All the "guidance", as you said, just laid out how the higher utilization is derived.

Oh, forgot: the SSG is just an M.2 connection with global memory. Tesla does the same thing, it just costs more, but it also has more horsepower than a single AMD card, so if you think the oil and gas industry went to AMD to make what you stated, you are incorrect. AMD created it as a cheap alternative, and it will work for smaller companies, maybe; even that is tenuous. All depends on how much horsepower they need.
That's what AMD presented with the Fiji-based SSG, so I'd hope there is some basis for it. I wouldn't call a $7k GPU cheap, but it seems to be enabling performance in cinema that wasn't there, even to the point that AMD made AMD Studios to make it more available. So unless Nvidia is about to flip another switch turning a Titan into an expensive pro card, I'm not seeing the argument.

You aren't a legal expert, nor are you being accurate there. You assumed wrong again........
Then perhaps you would care to explain how AMD managed a killer feature for the pro/server market that works on AMD and Intel hardware while Nvidia I guess only had the resources to make it work with IBM? Vendor lock-in despite all those Xeons in the DGX systems? Bit awkward don't you think?
 
For the love of (anti)christ Anarchist4000, Volta MPS is not equivalent to ACEs lol. This is ridiculous. You really don't have an inkling of a clue what you're talking about, and the more that sinks in the more awestruck I am by your persistence.


https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf

So you think prior to Volta an NV GPU couldn't have one process spawning several kernels? LOL.
I think you need to study your GPUs a bit more. What you linked is the reworded definition of what ACEs accelerate. Allowing asynchronous, unrelated tasks or processes to schedule across the GPU hardware concurrently. Execution is a separate aspect, but the "performance critical parts of MPS" added with Volta are precisely what ACEs do. As well as providing the virtualization, load balancing, and other features you describe above.

You guys are just mad I pointed that out years ago and you're stuck eating crow all this time. Just be sure to close your browser before running any benchmarks or games.

hahaha I just don't get why he's obsessed with MPS, it's so far removed from ACEs, I just don't see the link whatsoever.
If by "so far removed" you mean it does the exact same thing, then yes, you would be correct.
 
I think you need to study your GPUs a bit more. What you linked is the reworded definition of what ACEs accelerate. Allowing asynchronous, unrelated tasks or processes to schedule across the GPU hardware concurrently. Execution is a separate aspect, but the "performance critical parts of MPS" added with Volta are precisely what ACEs do. As well as providing the virtualization, load balancing, and other features you describe above.

You guys are just mad I pointed that out years ago and you're stuck eating crow all this time. Just be sure to close your browser before running any benchmarks or games.


If by "so far removed" you mean it does the exact same thing, then yes, you would be correct.
This is nonsense; if they do the same thing differently, they are still different. Gasoline and electric cars both take you from point A to point B, but no one would argue they are the same thing.
 
Feature levels designate which feature tiers are required for support. Or is Nvidia's support for 12.1 meaningless now? SM6, or at least some instructions within it, may fall under different feature levels, or at the very least have different performance impacts. GCN has hardware instructions for ballot and wave-level operations; Nvidia will be using several instructions to emulate them.


What do clocks have to do with this? The RF caching is where the improvements are derived; higher clocks are a side effect of that feature having a shorter path and using less energy. See that paper Nvidia presented, which has been linked before. GCN seems to scale fine given parallel submission with DX12 and async for utilization. In the case of Vega, the register caching is the heart of the power and clock features of Maxwell+. It's also a compiler-driven feature that is the "software scheduling" Nvidia uses. If not implemented, it could be what's holding back Vega, as I've pointed out before. That could very well mean less power or higher clocks. It could also mean higher utilization of the different blocks that are contending for register file access: scalar, texture, vector, LDS, etc. The very lack of that feature would describe the current Vega performance almost perfectly.
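The register-file caching idea being discussed can be sketched with a toy model (my own simplification, not a description of any actual Maxwell/Volta or Vega internals): a tiny operand cache in front of the main register file absorbs back-to-back reuses, so the large, power-hungry file is touched less often.

```python
def rf_accesses(operand_trace, cache_size=2):
    """Count main register-file reads given a small LRU operand
    cache in front of it.  Toy model of the 'register file caching'
    idea -- a recently used operand is served from the small, cheap
    cache instead of a full read of the large RF."""
    cache, main_reads = [], 0
    for reg in operand_trace:
        if reg in cache:
            cache.remove(reg)          # cache hit: cheap access
        else:
            main_reads += 1            # miss: full RF read
            if len(cache) >= cache_size:
                cache.pop(0)           # evict least-recently-used
        cache.append(reg)              # mark most-recently-used
    return main_reads

# Hypothetical operand stream with heavy short-range reuse:
trace = ["r0", "r1", "r0", "r1", "r0", "r2"]
print(rf_accesses(trace))  # far fewer RF reads than len(trace)
```

Fewer reads of the big register file is exactly where the "shorter path, less energy" claim comes from, with higher sustainable clocks as the knock-on effect.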


What I read was that integer and floating point could operate concurrently, with integer providing addressing in the background, much the same way GCN uses the scalar unit at the same time as the VALUs, but with different waves.


Sounds like the big Relive release later this year, but who knows. Intel dominates desktop and mobile, so no reason not to expect AMD to gain ground if Ryzen performs well. Vega at low voltage and clocks seems fine currently, and that's without everything enabled. Not to mention the packed math already being popular there.


Looked over the slides and didn't see much new, beyond obviously Nvidia forgetting about that MPS/ACE feature. Not surprising as it's rather inconvenient to advertise.

Threadsync and the like is an unrelated feature, but it still looks like that dynamic warp formation I described a while back. They don't spell it out, but that doesn't matter much to the programmers.


Well they wouldn't, as it would look bad. Just look at all the effort they've spent trying to dispel that "myth" with devs designing around it on Nvidia hardware.

Async on GCN just works, so there is no real guidance needed, beyond avoiding circular dependencies, which is rather obvious in any programming. All the "guidance," as you said, just laid out how the higher utilization is derived.


That's what AMD presented with the Fiji-based SSG, so I'd hope there is some basis for it. I wouldn't call a $7k GPU cheap, but it seems to be enabling performance in cinema that wasn't there before. Even to the point that AMD made AMD Studios to make it more available. So unless Nvidia is about to flip another switch turning a Titan into an expensive pro card, I'm not seeing the argument.


Then perhaps you would care to explain how AMD managed a killer feature for the pro/server market that works on AMD and Intel hardware while Nvidia I guess only had the resources to make it work with IBM? Vendor lock-in despite all those Xeons in the DGX systems? Bit awkward don't you think?


I'm done responding to you. You are so blind, I can't see how you just make things up like this and the mods just let it fly. You're on my ignore list; have fun with your fantasies.
 
This is nonsense. If they do the same thing differently, they are still different. Gasoline and electric cars both take you from point A to point B, but no one would argue they are the same thing.


Dude, it's coming from a guy that thinks something done in software (MPS), which was meant to help multiple applications share the resources of the entire GPU and which was/is in Kepler, is the same thing as instruction-level dispatching LOL. It's insulting our intelligence to respond to him.
 
I bought 2 Vega 64s. Why? Mostly because I like AMD (and I've invested in their stock, so I have a stake in the game). I made $18,000 last year on AMD, enough to pay for upgrades and new machines for a long time.

If someone asked me for a recommendation, I'd probably tell them to buy a GTX 1080. I have 2 1080s, and they are great cards.

My whole thing is that I like to support both companies equally. I actually have 4 gaming machines in use, 2 Nvidia/Intel and 2 AMD/AMD. Nice to mix things up and try different parts.

That said, if I only was using a single machine, it would be difficult not to get Nvidia/Intel just on gaming performance.
 
This is why I haven't responded anymore to these "technical" discussions in this thread. It feels like an episode of the Outer Limits involving a merry-go-round, a cat who incorrectly quotes Nietzsche, and a normal person trying to help who just ends up going insane by the end.

I like the posts from people who like the chip, know what it is, and are excited about building it up in a system. The rest... well, not so much.
 
I bought 2 Vega 64s. Why? Mostly because I like AMD (and I've invested in their stock, so I have a stake in the game). I made $18,000 last year on AMD, enough to pay for upgrades and new machines for a long time.

If someone asked me for a recommendation, I'd probably tell them to buy a GTX 1080. I have 2 1080s, and they are great cards.

My whole thing is that I like to support both companies equally. I actually have 4 gaming machines in use, 2 Nvidia/Intel and 2 AMD/AMD. Nice to mix things up and try different parts.

That said, if I only was using a single machine, it would be difficult not to get Nvidia/Intel just on gaming performance.

It's a fine chip, honestly. Just not a world beater. There's room for multiple players.
 
I care zero about aesthetics, as I have all types of fittings, hoses, radiator brands, etc. in my loop. That's a common misconception by people who haven't done one before. Have you ever installed water cooling before, or at least read what it does? It lowers the temperature of your components as a whole. So say normally an Nvidia card is running at 80C. Under a water loop it would be at 28C - 35C tops idling, and on the worst days somewhere in the 40C - 45C range under full load if you have the worst case in the world. Since the card is running cooler, it uses less electricity to do the same amount of work. So now you can undervolt the card and maintain max clocks 24/7. Also it is dumping less heat into the case, so your motherboard, hard drives, CPU, etc. are running cooler.
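The undervolting claim tracks with the usual first-order CMOS dynamic power model, P ≈ C·V²·f: at the same clock, power falls with the square of voltage. A back-of-envelope sketch with made-up numbers, not measurements from any specific card:

```python
def dynamic_power(c_eff, volts, freq_hz):
    """First-order CMOS dynamic power estimate: P ~ C * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

# Hypothetical numbers: same 1.6 GHz clock, 1.05 V stock vs 0.95 V undervolt.
stock = dynamic_power(1.0, 1.05, 1.6e9)
undervolted = dynamic_power(1.0, 0.95, 1.6e9)
savings = 1 - undervolted / stock
print(f"{savings:.0%} less dynamic power at the same clock")
```

That quadratic term is why a modest undervolt that keeps max clocks stable can shave a meaningful chunk off the card's power draw and heat output.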

Also you can make your build completely silent!

Water cooling can be as cheap or as expensive as you want it to be. I bought a used loop here on [H]ardocp a couple of years ago and have been expanding and changing it ever since.

Those blocks I was linking earlier do look really nice and are aesthetically pleasing. But to save money you should do something generic like this:
http://koolance.com/index.php?route=product/category&path=29_148_46


Then you can reuse these on every video card that you purchase. So you spend whatever the cost of these one time and that's it. Upgrade time next year? Reuse the same block over and over.

The pragmatist in me says to do this and be done with it. As far as cooling the memory, VRMs, etc. on the card, you just slap some generic VGA heat sinks on them with thermal tape. Hell, ask EVGA how important thermal tape and thermal pads are; razor1 can tell you about that debacle.

If you want it more aesthetically pleasing you can get copper heat sinks and all types of accessories.

I really like this approach.

http://shop.watercool.de/HEATKILLER-GPU-X-Core-60-DIY/en
 
I think you need to study your GPUs a bit more. What you linked is the reworded definition of what ACEs accelerate. Allowing asynchronous, unrelated tasks or processes to schedule across the GPU hardware concurrently. Execution is a separate aspect, but the "performance critical parts of MPS" added with Volta are precisely what ACEs do. As well as providing the virtualization, load balancing, and other features you describe above.

You guys are just mad I pointed that out years ago and you're stuck eating crow all this time. Just be sure to close your browser before running any benchmarks or games.


If by "so far removed" you mean does the exact same thing, than yes you would be correct.

I think you've had one swizzle too many
 
I think you've had one swizzle too many


What was that, Tensor core capabilities from GCN's current ALUs with a swizzle? lol, that took the cake, ate the cake and shat it out all at the same time. Why the hell did Google make tensor cores to begin with if all GCN needed was a swizzle lol.

 