Vega Rumors

It's impossible right now, and extremely expensive, with the way current chip architectures are. Pretty much the L1 cache data and register space would have to be shared across both dies. Currently there is just no way to do this without introducing a global cache (L2), which adds latency on top of the latency of the communication lanes.

Now, if that latency can be hidden by software, then yes, it will work, but that is a lot of work and a totally different programming methodology. Raja stated this with the introduction of the Fury Pro: it was meant to push devs to start thinking differently in their approach to building game engines. It's going to take a long time to do that, though.
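
To put rough numbers on the cache argument (everything below is an illustrative back-of-envelope sketch with made-up hit rates and cycle counts, not vendor figures), here is how an extra inter-die hop inflates the average access latency that software or the scheduler would then have to hide with extra in-flight work:

```python
# Back-of-envelope sketch (illustrative numbers only, not vendor figures):
# weighted average access latency for a single die vs. a split design where
# the shared L2 sits behind an inter-die link.

def avg_latency(l1_hit, l2_hit, l1_cyc, l2_cyc, mem_cyc, link_cyc=0):
    """Expected latency per access, with an optional inter-die link penalty
    added to every access that has to leave the local L1/register file."""
    miss_l1 = 1.0 - l1_hit
    miss_l2 = 1.0 - l2_hit
    return (l1_hit * l1_cyc
            + miss_l1 * l2_hit * (l2_cyc + link_cyc)
            + miss_l1 * miss_l2 * (mem_cyc + link_cyc))

# Assumed cache behaviour and latencies (cycles) -- purely illustrative.
single = avg_latency(l1_hit=0.5, l2_hit=0.6, l1_cyc=20, l2_cyc=100, mem_cyc=350)
split  = avg_latency(l1_hit=0.5, l2_hit=0.6, l1_cyc=20, l2_cyc=100, mem_cyc=350,
                     link_cyc=500)   # hypothetical fabric hop for remote L2/memory

print(f"single die : {single:6.1f} cycles/access")
print(f"split dies : {split:6.1f} cycles/access "
      f"({split / single:.1f}x -- this is the gap software would have to hide)")
```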

It's not impossible right now, it's just that no one has done it yet. There would certainly need to be architecture changes, and it'd be a lot of work. But that's kind of my point: it's a lot of designer man-hours to put something together. And as of now, there's not even much theoretical evidence that it would significantly improve performance.
 
It's not impossible right now, it's just that no one has done it yet. There would certainly need to be architecture changes, and it'd be a lot of work. But that's kind of my point: it's a lot of designer man-hours to put something together. And as of now, there's not even much theoretical evidence that it would significantly improve performance.


It's impossible. I just edited my post to show you the theoretical side of it, at least on the CPU side. nV and AMD don't really talk about how their caches work in depth to the point that we can contrast them, but they should work similarly to this, as we do have the theoretical latency hits of the different cache levels on GPUs; at least for GCN it was in one of their white papers.
 
I was discussing the higher end of the RX product line, and suggesting that AMD will have a Vega x2 before Xmas. Certainly before their competitor releases their newest gaming card.

As a high-end gamer, I am more interested in what Infinity Fabric brings than in dismissing it altogether.
 
I was discussing the higher end of the RX product line, and suggesting that AMD will have a Vega x2 before Xmas. Certainly before their competitor releases their newest gaming card.

As a high-end gamer, I am more interested in what Infinity Fabric brings than in dismissing it altogether.


Currently, GPUs can't use Infinity Fabric as a replacement for mGPU tech, not the way GPU architectures are today. From a theoretical standpoint, mesh technology is only part of the equation. How the internal data of each GPU, or of multiple GPUs, is shared is the other half of the equation. Without that being solved, this discussion about Infinity Fabric being the end of mGPU tech is moot, and you still have the latency of the fabric itself to consider; it's already too slow for CPUs, as we can see with Ryzen, Epyc, etc. GPUs need at least, what, 10x the bandwidth of CPUs? That is the bandwidth Infinity Fabric has to provide for a GPU to see the same latency we see with CPUs.
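
For a sense of scale on that bandwidth point, here is a trivial ratio check. The numbers are rounded, ballpark figures (HBM2 on a big Vega-class GPU, dual-channel DDR4 on a desktop CPU, and an assumed order-of-magnitude figure for a single die-to-die fabric link), so treat it as a sketch rather than a spec comparison:

```python
# Rough ratio check on the "GPUs need ~10x the bandwidth of CPUs" point.
# Figures are approximate/illustrative, not exact vendor specs.

hbm2_vega_gbps = 484.0   # ~local memory bandwidth of a big HBM2 GPU
ddr4_dual_gbps = 43.0    # ~dual-channel DDR4-2666 a desktop CPU sees
die_link_gbps  = 40.0    # ~assumed order of magnitude of one die-to-die fabric link

print("GPU vs CPU local bandwidth ratio: "
      f"{hbm2_vega_gbps / ddr4_dual_gbps:.1f}x")
print("Fabric link vs what a GPU expects locally: "
      f"{die_link_gbps / hbm2_vega_gbps:.1%} of HBM2 bandwidth")
# A fabric link sized for CPU traffic covers only a small fraction of what a
# GPU moves locally, which is the gap being argued about here.
```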
 
It's impossible. I just edited my post to show you the theoretical side of it, at least on the CPU side. nV and AMD don't really talk about how their caches work in depth to the point that we can contrast them, but they should work similarly to this, as we do have the theoretical latency hits of the different cache levels on GPUs; at least for GCN it was in one of their white papers.

You mentioned a solution yourself: using the driver stack or some other tricks to overcome the L2 latency hit. My point is there's no hardware reason you couldn't build a GPU this way. It's not impossible. It may not be a good idea. It may cost 300% more and only give you 15% more performance. Or it might cost more and the latency hit would actually give you LESS performance. I'm not sure where the R&D on this topic stands at AMD or NV, but given that neither of them has even mentioned the concept, I'm guessing they're either very early in studying the problem, or they've done initial studies and seen no reason to go further.
 
You mentioned a solution yourself: using the driver stack or some other tricks to overcome the L2 latency hit. My point is there's no hardware reason you couldn't build a GPU this way. It's not impossible. It may not be a good idea. It may cost 300% more and only give you 15% more performance. Or it might cost more and the latency hit would actually give you LESS performance. I'm not sure where the R&D on this topic stands at AMD or NV, but given that neither of them has even mentioned the concept, I'm guessing they're either very early in studying the problem, or they've done initial studies and seen no reason to go further.


The L2 cache hit is only a small fraction of the latency, though; the latency introduced by using Infinity Fabric between the two (or more) GPUs is going to be 200 or 300% more.

Yes, it's true there is no hardware reason not to build it that way, and it will come down to cost, as you stated earlier; I wholeheartedly agree on that.

Actually, this is a very old problem, from even before the advent of SLI and Xfire :). The cost-to-benefit ratio is just not there yet. It will be in the future, but I really don't think in the near future, at least not for gaming. This is why making mGPU techniques transparent to the developer has not become feasible yet.
 
The L2 cache hit is only a small fraction of the latency, though; the latency introduced by using Infinity Fabric between the two (or more) GPUs is going to be 200 or 300% more.

Yes, it's true there is no hardware reason not to build it that way, and it will come down to cost, as you stated earlier; I wholeheartedly agree on that.

Actually, this is a very old problem, from even before the advent of SLI and Xfire :). The cost-to-benefit ratio is just not there yet. It will be in the future, but I really don't think in the near future, at least not for gaming. This is why making mGPU techniques transparent to the developer has not become feasible yet.
nVidia recently announced that they studied this type of technology, and they will be using it for the next generation after Volta. It proved to be enormously useful. So I don't think we should discount it so easily. I would be shocked to see AMD pull it off with a dual Vega chip though. If they can do it with Navi on a smaller node it could be extremely interesting.
 
The L2 cache hit is only a small fraction of the latency, though; the latency introduced by using Infinity Fabric between the two (or more) GPUs is going to be 200 or 300% more.

Yes, it's true there is no hardware reason not to build it that way, and it will come down to cost, as you stated earlier; I wholeheartedly agree on that.

Actually, this is a very old problem, from even before the advent of SLI and Xfire :). The cost-to-benefit ratio is just not there yet. It will be in the future, but I really don't think in the near future, at least not for gaming. This is why making mGPU techniques transparent to the developer has not become feasible yet.

My knowledge isn't anywhere close to deep enough to analyze the effects of that latency hit, much less how you'd handle it. I'm going by the fact that the design teams at NV and AMD are pretty smart dudes, and they don't seem to be pursuing this path.

SLI and CF are near death, IMO; the fewer users go that route, the less incentive the devs have to support it in drivers or in-game. I don't see that turning around anytime soon.
 
nVidia recently announced that they studied this type of technology, and they will be using it for the next generation after Volta. It proved to be enormously useful. So I don't think we should discount it so easily. I would be shocked to see AMD pull it off with a dual Vega chip though. If they can do it with Navi on a smaller node it could be extremely interesting.


Well, the L1 cache sizes and register spaces are much larger in Volta! A LOT larger. But doing this for games is a whole new ballpark; for HPC and DL that data can be shifted around, while games have many more dependencies, and that causes a different set of problems.
 
My knowledge isn't anywhere close to deep enough to analyze the effects of that latency hit, much less how you'd handle it. I'm going by the fact that the design teams at NV and AMD are pretty smart dudes, and they don't seem to be pursuing this path.

SLI and CF are near death, IMO; the fewer users go that route, the less incentive the devs have to support it in drivers or in-game. I don't see that turning around anytime soon.


SLI and Xfire are pretty much dead. nV and AMD are actively looking to get around the problem, and have been for many years (generations now). It's the same problem as distributed computing, but at the chip level. If the network in a distributed system can't support the workload, everything slows down, right? And if one piece of that network slows down, the rest of the network also gets held up. Ask yourself what the weakest links in SLI and Xfire are, and that is where the problems will be for this tech too; those problems must be solved for this tech to be transparent. It's very well documented, too.

Back to NUMA-aware programming: it's the same set of problems multiple CPUs had before NUMA was introduced, but at a much higher level, because the throughput needs of GPUs are on the order of 10x more, and with the need for transparency it's even greater than that. You can't use HBCC and treat main memory as the global cache; it just won't work with the latency that would incur. We aren't talking about 200 or 300% anymore, we are talking about thousands of percent if that is done, because now we have to factor in read, write and access times.
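
Here's a crude way to see why routing misses out to host memory blows up the numbers. The latencies below are assumed round figures, and whether it works out to hundreds or thousands of percent depends entirely on what you assume for the fabric/PCIe hop, so take it as a sketch of the shape of the problem:

```python
# Sketch of the "main memory as global cache" penalty: a miss that goes to
# host memory pays a fabric/PCIe hop each way plus host DRAM access time.
# Numbers are assumed round figures for illustration only.

local_hbm_ns = 350      # ~local HBM access as seen by the GPU core
fabric_hop_ns = 1000    # ~one direction over PCIe/host fabric
host_dram_ns = 100      # ~host DRAM access once the request arrives

remote_ns = fabric_hop_ns + host_dram_ns + fabric_hop_ns   # request + access + reply
print(f"remote access ~{remote_ns} ns vs local ~{local_hbm_ns} ns "
      f"=> ~{remote_ns / local_hbm_ns * 100:.0f}% of local latency")
```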
 
Agreed, but as I said above, this would be different from Crossfire/SLI as it should be completely transparent.

By magic? Simply having faster interconnects isn't going to make two chips appear as one.

If it were easy to make it transparent, it would have been done long ago, at least at the driver level, so that it would be completely transparent to game devs.
 
Currently, GPUs can't use Infinity Fabric as a replacement for mGPU tech, not the way GPU architectures are today. From a theoretical standpoint, mesh technology is only part of the equation. How the internal data of each GPU, or of multiple GPUs, is shared is the other half of the equation. Without that being solved, this discussion about Infinity Fabric being the end of mGPU tech is moot, and you still have the latency of the fabric itself to consider; it's already too slow for CPUs, as we can see with Ryzen, Epyc, etc. GPUs need at least, what, 10x the bandwidth of CPUs? That is the bandwidth Infinity Fabric has to provide for a GPU to see the same latency we see with CPUs.

Infinity Fabric has two sides to it: the interconnects & the control logic.

If two PCIe buses can run in Crossfire, then two GPUs can communicate over the fabric using logic. It is not as difficult as you are making it out to be. Latencies will be much lower than SLI/Crossfire, too.

I am not even getting into specifics, but if AMD can put two GPUs on one card, they can put two GPUs in one larger SoC. Cache coherency is something that needs working out, but AMD seems to have it working for their APUs.
 
You want AMD to put two Vega GPUs on a single card? That would consume over 880 watts when overclocked. Is that even possible? What kind of power connectors would it need? I guess they could undervolt them a bit and maybe get the thing down to 500 watts, but that's still a ton of power for that level of performance, which I assume would fall short of GTX 1070 SLI and perhaps match a single GTX 1080 Ti in games that scale well. Not exactly a "win" for AMD, IMO.
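
For what it's worth, here is the kind of napkin math behind those wattage guesses, using the usual dynamic-power scaling P ~ C*V^2*f. The baseline per-die wattage and the undervolt/underclock points are assumptions for illustration, not leaked specs:

```python
# Toy estimate of dual-GPU board power under voltage/frequency scaling,
# using P_dynamic ~ C * V^2 * f. Baseline wattage and scaling points are
# assumptions for illustration, not real specs.

def scaled_power(base_w, v_scale, f_scale):
    """Scale a baseline dynamic power figure by relative voltage and clock."""
    return base_w * (v_scale ** 2) * f_scale

one_die_oc_w = 440.0                         # assumed heavily overclocked die
two_die_oc_w = 2 * one_die_oc_w
two_die_uv_w = 2 * scaled_power(one_die_oc_w, v_scale=0.85, f_scale=0.90)

print(f"two dies, overclocked        : ~{two_die_oc_w:.0f} W")
print(f"two dies, -15% V / -10% clock: ~{two_die_uv_w:.0f} W")
```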

But infinity fabric has magical fairy dust that will cut power consumption in half. You didn't know this? It's powered by the hopes and dreams of deluded AMD fans.

Realistically, I could see them someday applying Infinity Fabric to a GPU half the size of RX Vega, but that's a ways off, and I suspect Nvidia MCM wouldn't be far behind.
 
Infinity Fabric has two sides to it: the interconnects & the control logic.

If two PCIe buses can run in Crossfire, then two GPUs can communicate over the fabric using logic. It is not as difficult as you are making it out to be. Latencies will be much lower than SLI/Crossfire, too.

I am not even getting into specifics, but if AMD can put two GPUs on one card, they can put two GPUs in one larger SoC. Cache coherency is something that needs working out, but AMD seems to have it working for their APUs.


Err, no, they don't have it worked out in APUs. APUs are just one GPU that communicates with the CPU over system memory. I'm pretty sure the GPU doesn't have access to the CPU's L3 cache either.

No, the control logic for Infinity Fabric doesn't see what the GPU control silicon needs or what is coming up. There is no access to the GPU control silicon's data through Infinity Fabric. Please do go into the specifics, because you will see that it's not there.
 
SLI and Xfire are pretty much dead. nV and AMD are actively looking to get around the problem

NVIDIA has decided to build the largest possible die a given silicon process can yield.
AMD still has middle-of-the-road sized GPUs, so I would not rule out CF.
Ironically, next-gen NVIDIA will have a larger performance gap over AMD GPUs than Intel has over AMD CPUs.
 
NVIDIA has decided to build the largest possible die a given silicon process can yield.
AMD still has middle-of-the-road sized GPUs, so I would not rule out CF.
Ironically, next-gen NVIDIA will have a larger performance gap over AMD GPUs than Intel has over AMD CPUs.


You can never rule anything out with AMD, but going CF with Vega on one board is, I think, going to be a bad idea.

Yeah, nV will have a larger perf gap over AMD with Volta, which is really bad for AMD, because the turnaround time in GPUs is much faster, so design changes when physical limits are hit are more dramatic; without R&D, those design changes suffer.
 
You have no credibility
The thing about the internet is that my statements matter more than my credibility. And since so far you are not addressing the points, I presume you cannot do it.
There is a control side to the fabric, that you are not addressing, or simple not aware of.
Why, I am aware that it exists. It is one of the few faces of Chimera Fabric (that name is more awesome than Infinity Fabric anyway). The issue is that I am indeed unaware of what the hell its existence achieves that is so important to remember.
And what you are overlooking and seem to not get, even in a most rudimentary fashion, even if AMD just created crossfire on a chip, how is that bad for us High End Gamers ?
How is it good? I do not care about what is not bad, I care about what is good. Typical multi-GPU hasn't hit that mark in a while.
Infinity fabric exists for Vega. Raja said so.
And Lisa Su said that Infinity Fabric exists (or rather is used, in fact) in Ryzen. Which one is not a liar? What if in reality it simply exists in every AMD product as a buzzword for their interconnect stuff? Shocking realization, I know.
But you do have to defend why you pretend it doesnt exist, even after being told and knowing what Raja said.
I can safely pretend it does not exist because it is not a physical-layer development. It can work over PCI-E, but it won't exceed PCI-E's capabilities by a meaningful margin no matter what it does. It can work over memory, but once again, the memory controller won't work any faster because of it.

Oh, you unintentionally brought up DMA there. Well, I am glad to let you know that Crossfire already uses DMA over PCI-E (and in fact, AMD used that ability to slap an SSD onto a Fury X and call it a new product... which it was). And it does not help it one bit.
 
I love all the armchair engineers in here who say things are impossible, yet a true engineer never thinks something is impossible, just cost-prohibitive ;) Also, some of you should join the government, as they like to say this is the way things have always been done. I don't think we will continue to use GPUs the same way we always have; I am sure an engineer will come up with a brilliant idea for a different way at some point. I wish I got a buck every time someone on here said something was impossible, as I would be the richest guy in the world :)
 
I love all the armchair engineers in here who say things are impossible, yet a true engineer never thinks something is impossible, just cost-prohibitive ;) Also, some of you should join the government, as they like to say this is the way things have always been done. I don't think we will continue to use GPUs the same way we always have; I am sure an engineer will come up with a brilliant idea for a different way at some point. I wish I got a buck every time someone on here said something was impossible, as I would be the richest guy in the world :)
lol. You did make me laugh!
 
I love all the armchair engineers in here who say things are impossible, yet a true engineer never thinks something is impossible, just cost-prohibitive ;) Also, some of you should join the government, as they like to say this is the way things have always been done. I don't think we will continue to use GPUs the same way we always have; I am sure an engineer will come up with a brilliant idea for a different way at some point. I wish I got a buck every time someone on here said something was impossible, as I would be the richest guy in the world :)


That is why I stated that with the current GPU architecture it's not feasible. There needs to be a big change in the way data is routed through the GPU; without that change it's not even remotely possible to have transparency across any number of GPUs. Unless you want the performance of two dies to be like 50% of a single die? Would that be good for you? Pay more for less? It would be worse than SLI or Xfire.
 
From the MCM paper from nV:

2.2 Multi-GPU Alternative
An alternative approach is to stop scaling single GPU performance, and increase application performance via board- and system-level integration, by connecting multiple maximally sized monolithic GPUs into a multi-GPU system. While conceptually simple, multi-GPU systems present a set of critical challenges. For instance, work distribution across GPUs cannot be done easily and transparently and requires significant programmer expertise [20,25,26,33,42,50]. Automated multi-GPU runtime and system-software approaches also face challenges with respect to work partitioning, load balancing, and synchronization [23, 49].
Moreover, a multi-GPU approach heavily relies on multiple levels of system interconnections. It is important to note that the data movement and synchronization energy dissipated along these interconnects significantly affects the overall performance and energy efficiency of such multi-GPU systems. Unfortunately, the quality of interconnect technology in terms of available bandwidth and energy per bit becomes progressively worse as communication moves off-package, off-board, and eventually off-node, as shown in Table 2 [9,13,16,32,46]. While the above integration tiers are an essential part of large systems (e.g. [19]), it is more desirable to reduce the off-board and off-node communication by building more capable GPUs.

Doesn't this sound remarkably like what I was saying? You don't need to be an EE to have an understanding of what is going on. Programming is more than enough to gain this understanding, because it's heavily leveraged by current programming models; we know what fails in SLI and Xfire and why. Going from software to hardware: if you don't understand how the hardware functions, you can't write good code, that's just the way it is. That doesn't mean the same person can build the hardware, though; they just need to know how it functions.



http://research.nvidia.com/sites/default/files/pubs/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs//p320-Arunkumar.pdf


Now let's go through this paper, shall we, and see whether what I have been saying is actually on the money or not? That first part definitely was.

3.3 On-Package Bandwidth Considerations

3.3.3 On-Package Link Bandwidth Configuration.


Hmm, did I mention inter-GPU bandwidth?
We propose three mechanisms to minimize inter-GPM bandwidth by capturing data locality within a GPM.

Did I mention data locality?

Yes I did lol.

Guess what, 2 for 2 so far.

5.1 Revisiting MCM-GPU Cache Architecture
5.1.1 Introducing L1.5 Cache

Wow, make that 3 for 3; mentioned cache too, right?

5.2 CTA Scheduling for GPM Locality

Mentioned the control silicon vs. data locality? Yeah, did that too, 4 for 4! Amazing, I must have pulled all those things out of my ass :).

There ya have it, those are the changes that need to be made. Interesting: how is it that an armchair engineer can understand these things without even reading that paper?

So you tell me, do you want to read the research from nV (which was posted a month ago on this forum) and their approach, and be shown how ridiculous comments like "armchair engineer" are, or understand that some people on this forum actually know their shit about what they are talking about? I've been saying the same thing since, oh, a year ago.
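
To make the CTA/GPM locality point concrete, here is a toy sketch (my own illustration, not the paper's actual scheduling policy): assigning consecutive CTAs to the same GPU module in contiguous chunks keeps neighbouring CTAs, which tend to touch neighbouring data, on the same module, while naive round-robin scatters them and maximises cross-module traffic:

```python
# Illustrative sketch of locality-aware CTA-to-GPM assignment (not the
# paper's exact policy): contiguous chunks vs. round-robin.

def round_robin(num_ctas, num_gpms):
    return [cta % num_gpms for cta in range(num_ctas)]

def contiguous_chunks(num_ctas, num_gpms):
    chunk = (num_ctas + num_gpms - 1) // num_gpms
    return [min(cta // chunk, num_gpms - 1) for cta in range(num_ctas)]

def cross_module_neighbours(assignment):
    """Count adjacent CTA pairs split across modules (a proxy for remote traffic)."""
    return sum(1 for a, b in zip(assignment, assignment[1:]) if a != b)

num_ctas, num_gpms = 64, 4
print("round-robin splits :", cross_module_neighbours(round_robin(num_ctas, num_gpms)))
print("contiguous splits  :", cross_module_neighbours(contiguous_chunks(num_ctas, num_gpms)))
```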
 
Yes, generic as written.

AFR is how current x2 cards work. Change that and you change everything. Your whole argument stems from something in the uber-far-off, unobtainable future.

Yet Vega still has Infinity Fabric!
I get that you do not want to admit AMD has been spending millions on HSA to solve those problems, and has developed a new coherent cache. They also said their uarch is smarter, so developers need to let Vega handle the code instead of traditional means. Closer to the metal.

Everything you say can't happen seems to be designed into AMD's APUs, and they are working on piggybacking that through their control fabric.


Lastly, for as much as you seem to know, you seem to dismiss nearly everything about Infinity Fabric. It seems, to you, that it can't do anything good & is just wasted IP.

I don't believe you have any idea how AMD is going to use Infinity Fabric with their GPUs.
 
Yes, generic as written.

AFR is how current x2 cards work. Change that and you change everything. Your whole argument stems from something in the uber-far-off, unobtainable future.

Yet Vega still has Infinity Fabric!
I get that you do not want to admit AMD has been spending millions on HSA to solve those problems, and has developed a new coherent cache. They also said their uarch is smarter, so developers need to let Vega handle the code instead of traditional means. Closer to the metal.

Everything you say can't happen seems to be designed into AMD's APUs, and they are working on piggybacking that through their control fabric.


Lastly, for as much as you seem to know, you seem to dismiss nearly everything about Infinity Fabric. It seems, to you, that it can't do anything good & is just wasted IP.

I don't believe you have any idea how AMD is going to use Infinity Fabric with their GPUs.


Did you read the nV paper? Did you see that a 256 CU part would need 700+ GB/s of bandwidth? How do you propose Infinity Fabric is going to support that, with how many more SPs we are talking about?

This isn't an exercise in academia when you can't even see the limitations of what is currently possible.

Oh, so you don't want to get into the nuts and bolts of how they are doing things currently and what needs to be done to achieve full transparency in multi-GPU?

I get that you do not want to admit AMD has been spending millions on HSA to solve those problems, and has developed a new coherent cache. They also said their uarch is smarter, so developers need to let Vega handle the code instead of traditional means. Closer to the metal.


Tell me how a coherent cache would work, what mechanisms are needed for it to function across two separate GPUs, and what those limitations are right now?

You make it sound like this problem can be magically solved overnight with Infinity Fabric.

IT CAN'T! If it were that simple, it would have been done years ago. Do you know HyperTransport has most of the functionality of Infinity Fabric? Yes, it does. Infinity Fabric is the upgrade to HyperTransport, lol.

The things that are being done with HSA WILL NOT SOLVE the problems on the gaming side of things, not entirely.

The nV whitepaper that I linked is specific to HPC applications, not games. Games have hard dependencies which many HSA applications do not have. You can't just expect things to work the same way across different types of apps.
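
Just to put the bandwidth numbers in perspective, here's a naive linear scaling of the 700+ GB/s figure cited above down to smaller configurations, next to what a familiar off-package link provides. The per-CU scaling is a rough assumption, not measured data:

```python
# Naive linear scaling of the cited on-package bandwidth requirement.
# The per-CU figure is a rough assumption, not measured data.

cited_cus, cited_link_gbps = 256, 700.0
per_cu = cited_link_gbps / cited_cus

for cus in (64, 128, 256):
    print(f"{cus:3d} CUs -> ~{cus * per_cu:5.0f} GB/s of inter-module bandwidth")

# For reference, PCIe 3.0 x16 is roughly 16 GB/s per direction.
print("PCIe 3.0 x16 ~ 16 GB/s per direction")
```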
 
Yes, did you read the AMD one..?

NVidia has their own troubles and is going down their own route. AMD has actually been tackling those problems with their HSA pursuit.
 
Yes, did you read the AMD one..?

NVidia has their own troubles and is going down their own route. AMD has actually been tackling those problems with their HSA pursuit.


Yes I did, and it covers the same things; it talks about the exact same things. I can't believe you just said that. AMD did not fully tackle those problems through HSA alone, nor will they ever be able to via an API; right now there is a HUGE programming side to it, dos and don'ts. The burden is on the application dev currently. It's actually VERY similar to NUMA! And none of this will ever transfer over to gaming, not in its current form; it's A LOT of headaches to do these things in real time, because hiding the latency is extremely hard in a real-time application versus applications that can just crunch numbers and spit out information whenever they need to.
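
A quick illustration of the real-time constraint (the per-sync cost and the sync counts are assumed round numbers, purely for illustration): a game only has ~16.7 ms per frame at 60 fps, so even modest cross-GPU synchronization eats a visible chunk of the budget, whereas a batch HPC job just runs a bit longer:

```python
# Frame-budget arithmetic for real-time latency hiding.
# Sync cost and counts are assumed round numbers for illustration.

frame_budget_ms = 1000.0 / 60            # ~16.7 ms at 60 fps
sync_cost_us    = 20.0                   # assumed cost of one cross-GPU sync
syncs_per_frame = [10, 50, 200]          # dependency-heavy engines sync a lot

for n in syncs_per_frame:
    overhead_ms = n * sync_cost_us / 1000.0
    print(f"{n:3d} syncs/frame -> {overhead_ms:5.2f} ms "
          f"({overhead_ms / frame_budget_ms:5.1%} of the 60 fps budget)")
```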
 
I love all the armchair engineers in here who say things are impossible, yet a true engineer never thinks something is impossible, just cost-prohibitive ;) Also, some of you should join the government, as they like to say this is the way things have always been done. I don't think we will continue to use GPUs the same way we always have; I am sure an engineer will come up with a brilliant idea for a different way at some point. I wish I got a buck every time someone on here said something was impossible, as I would be the richest guy in the world :)
I do claim that with what AMD presently has, making a viable MCM GPU that will not act as a Xfire setup is not cost-prohibitive... it is impossible.
Did you get a buck?
 
Bahahaha so some people believe AMD is so advanced it can pull off magic... but at the same time it takes 400W to match a 1080? Something isn't adding up for me! :D


Well it is Advanced Micro Devices ;)

Damn, just because of a magical interconnect AMD can do miracles. It's like saying that if I upgrade my LAN from Cat 5e to Cat 6, it's going to change my life...
 
Bahahaha so some people believe AMD is so advanced it can pull off magic... but at the same time it takes 400W to match a 1080? Something isn't adding up for me! :D
All true, but this bullshit about 400W is getting annoying. Yeah, Vega seems to be a bust, but it's not using 400W, lol. You have a valid point, but don't lose credibility by exaggerating stuff like that. Stock Vega won't use 400W; it looks like it will be around 275W for the gaming Vega from the leaked specs, or at worst 300W. Criticize a product and stick to the facts; anything else makes you sound less credible.
 
All true, but this bullshit about 400W is getting annoying. Yeah, Vega seems to be a bust, but it's not using 400W, lol. You have a valid point, but don't lose credibility by exaggerating stuff like that. Stock Vega won't use 400W; it looks like it will be around 275W for the gaming Vega from the leaked specs, or at worst 300W. Criticize a product and stick to the facts; anything else makes you sound less credible.

We'll see. Only a few days left!

I am still curious about that blind test where it won against a 1080 Ti. It'd be nice to have it done more in depth, but at the end of the day... stock vs. stock, Vega did beat the 1080 Ti where it really matters.
 
There could be a multitude of reasons for Vega looking better than the 1080 Ti; I suspect that the difference in panel type might have subconsciously affected the result.

This is me speculating, however. But after digging into the details, I am starting to really doubt how AMD went about their comparisons and how they arrived at the $300-cheaper conclusion; most of the possible scenarios I could come up with have been in nVidia's favour, with only the most unlikely scenarios (e.g. Vega being priced at 1070 prices) favouring AMD.
 
We'll see. Only a few days left!

I am still curious about that blind test where it won against a 1080 Ti. It'd be nice to have it done more in depth, but at the end of the day... stock vs. stock, Vega did beat the 1080 Ti where it really matters.

It didn't win. Their point is just that once you have G-Sync or FreeSync, with that particular setup you may not see a difference. The 1080 Ti could be getting 20 more fps, but it's not evident due to human perception. When you don't have an fps counter and it's helped by FreeSync, you can't tell the difference. Also, AMD could be using a FreeSync 2 HDR monitor that might make the setup look cleaner; that's just a guess, since we don't know the actual setup. I will be very surprised if it matches the 1080 Ti in any game, unless AMD is truly sandbagging until the last moment.
 
It didn't win. Their point is just that once you have G-Sync or FreeSync, with that particular setup you may not see a difference. The 1080 Ti could be getting 20 more fps, but it's not evident due to human perception. When you don't have an fps counter and it's helped by FreeSync, you can't tell the difference. Also, AMD could be using a FreeSync 2 HDR monitor that might make the setup look cleaner; that's just a guess, since we don't know the actual setup. I will be very surprised if it matches the 1080 Ti in any game, unless AMD is truly sandbagging until the last moment.

People chose it over the nVidia setup. How is that not winning?

Was it biased? Probably. Do I put much weight into it? Nah. But Vega still won.
 
Yes I did, and it covers the same things; it talks about the exact same things. I can't believe you just said that. AMD did not fully tackle those problems through HSA alone, nor will they ever be able to via an API; right now there is a HUGE programming side to it, dos and don'ts. The burden is on the application dev currently. It's actually VERY similar to NUMA! And none of this will ever transfer over to gaming, not in its current form; it's A LOT of headaches to do these things in real time, because hiding the latency is extremely hard in a real-time application versus applications that can just crunch numbers and spit out information whenever they need to.

I am glad you have scaled down your argument to something less obnoxious.

I agree with you for the most part and understand the complexities involved. I just do not think you are grasping how far out ahead AMD is in this department. It has nothing to do with Nvidia, but AMD is at the forefront of Heterogeneous Computing (HSA), and they have an answer for many of the things you claim they don't. Also, let's not forget that AMD has been pursuing associated technologies that fit nicely into their HSA ecosystem. AMD is simply headed in a different direction than Nvidia.

You are so sure of yourself, yet you are missing some simple things & unable to admit that Raja said it's coming. Even so, I don't see you speculating on HOW, or on WHAT Infinity Fabric could mean for gamers, or allowing yourself to speak about the good it could bring. All you want to do is badmouth the situation and claim AMD is less competent than you. All the while, I personally watched a developer on stage at C&C claim near 100% scalability using Vega. Didn't that ring a bell for anyone about where AMD might be going with their RX gaming line in the immediate future? I foresee AMD making "TitanRipper" happen before Xmas, then leveraging Vega x2 on their push to 7nm Navi, with further advancements in Infinity Fabric (Infinity Fabric 2.0), in a tick-tock cadence. I bet we might even see a Vega x4 at some point nearing 2019.


I'll trust what Dr. Su & Raja have said over you. Sorry!
 
People chose it over the nVidia setup. How is that not winning?

Was it biased? Probably. Do I put much weight into it? Nah. But Vega still won.

Neither AMD nor Nvidia won. The FreeSync panel won over the G-Sync panel in the monitor battle, that's all. The test was about panels; the rest of the system was irrelevant due to that fact.
 
Also, AMD could be using a FreeSync 2 HDR monitor that might make the setup look cleaner; that's just a guess, since we don't know the actual setup.

Quick google-fu says that such ultrawide monitors do not exist yet. The closest is a 3840x1080 49" "really f-ing wide" one. The other FreeSync 2 monitors are 1440p 144Hz.
 
Neither AMD nor Nvidia won. The FreeSync panel won over the G-Sync panel in the monitor battle, that's all. The test was about panels; the rest of the system was irrelevant due to that fact.

Not saying I disagree; the panels were probably more important, and AMD likely chose a certain one for a reason... but it's not like Vega can do G-Sync or nVidia can do FreeSync. So it does include the card.

I would have liked to see how it would have gone with motion blur off. It sounded like the nVidia setup used more aggressive motion blur, IMO.
 
I am glad you have scaled down your argument to something less obnoxious.

I agree with you for the most part and understand the complexities involved. I just do not think you are grasping how far out ahead AMD is in this department. It has nothing to do with Nvidia, but AMD is at the forefront of Heterogeneous Computing (HSA), and they have an answer for many of the things you claim they don't. Also, let's not forget that AMD has been pursuing associated technologies that fit nicely into their HSA ecosystem. AMD is simply headed in a different direction than Nvidia.

You are so sure of yourself, yet you are missing some simple things & unable to admit that Raja said it's coming. Even so, I don't see you speculating on HOW, or on WHAT Infinity Fabric could mean for gamers, or allowing yourself to speak about the good it could bring. All you want to do is badmouth the situation and claim AMD is less competent than you. All the while, I personally watched a developer on stage at C&C claim near 100% scalability using Vega. Didn't that ring a bell for anyone about where AMD might be going with their RX gaming line in the immediate future? I foresee AMD making "TitanRipper" happen before Xmas, then leveraging Vega x2 on their push to 7nm Navi, with further advancements in Infinity Fabric (Infinity Fabric 2.0), in a tick-tock cadence. I bet we might even see a Vega x4 at some point nearing 2019.


I'll trust what Dr. Su & Raja have said over you. Sorry!


Do you realize nV with CUDA has more features than HSA as a whole, from everyone involved? LOL, and you say they are behind? Do you realize nV's mesh technology was released before Infinity Fabric and has more forward-looking features implemented and being used currently? Something AMD is still in the process of implementing for future generations of GPUs. AMD has no presence in the HPC (HSA) market, and you want to sit here and post about things that aren't even widely used in the WORLD?

If you want to look things up before you spout off about who is talking shit, who is being obnoxious, or who thinks he knows what he is talking about because he read a few marketing presentations from AMD, that would be a better way to go! You can look at the many talks here about HSA vs. CUDA and AMD's vs. nV's implementations. It has been discussed many times over.

RAJA NEVER STATED MGPU WILL BE REPLACED ANY TIME SOON!

That is BS. What HE STATED WAS that mGPU was the near future; later on it will be replaced with technology that is transparent, which is the ultimate goal, but in the meantime DEVELOPERS WILL NEED TO GET THEIR HANDS DIRTY. THERE WAS NO TIMELINE ON THAT IMPLEMENTATION of transparency. Please don't sit here and make things up about what Raja stated, because I can link everything he said.

Am I badmouthing the situation when I'm pulling up relevant facts which you can't even discuss, because, well, maybe you just can't understand them? Hmm? You were the one who stated you didn't want to get into specifics, so I did, and now you are just talking BS. Counter what I stated, please, instead of this rhetoric and the BS AMD marketing campaigns we were never talking about. It's not badmouthing to say it's not going to happen with current tech because of the limitations current tech has; it's just reality. Also, AMD doesn't need this solution to be competitive: if you glue two bad chips together, you still get one bad solution. AMD needs to rework their core GPU architecture to get anywhere, not stick two ICBMs in one package and call it a day.

I'm so damn sure about it because I'm a programmer and have been writing multi-GPU applications for years, working on them in many different capacities, across the entire graphics and programming pipeline. I have also done HPC and AI work in the past, so I am familiar with the capabilities of software vs. hardware, what can be done, and what can't be done currently. This is exactly why I knew about the NUMA issues Ryzen and Epyc would have the second the CCX latency issue popped up (actually even before that; the results were too obvious to ignore). It was something I had seen before, you know with what architecture? Yeah, the Pentium D. Granted, the Pentium D had other issues, but a shared front-side bus just doesn't work well where latency is concerned; there's just not enough bandwidth to supply both chips properly.
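
On the Ryzen/Epyc side, the CCX effect is easy to model: effective latency is just a blend of local and cross-fabric accesses, so the more a workload's threads and data straddle the two CCXes, the worse it looks. The latency figures below are assumed round numbers, not measurements:

```python
# Quick model of why NUMA-unaware placement hurt early Ryzen/Epyc results:
# effective memory latency as a function of how many accesses cross to the
# other CCX/die. Latencies are assumed round numbers, not measurements.

def effective_latency(local_ns, remote_ns, remote_fraction):
    return (1.0 - remote_fraction) * local_ns + remote_fraction * remote_ns

local_ns, remote_ns = 80.0, 130.0     # assumed local vs cross-fabric access
for frac in (0.0, 0.25, 0.5):
    eff = effective_latency(local_ns, remote_ns, frac)
    print(f"{frac:4.0%} remote accesses -> ~{eff:5.1f} ns effective "
          f"(+{(eff / local_ns - 1) * 100:.0f}%)")
```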
 