Vega Rumors

Since the 'blind test' conversation is being held in two different threads and I don't want to crap in the official thread: my whole issue with the 'blind' (but not really) test is that, up until now, video card reviews were about the card and the card only. Use the best hardware you can so you don't bottleneck the card, slap it on a monitor that reveals what the card can (or can't) do, warts and all, and present the data to show what the card is capable of. What this 'test' does is mask what the card can do, because it's now a review of the card plus the monitor, not the card itself. That's no longer a test of the card but a test of the ecosystem in which it resides, which, frankly, tosses out the whole point of testing the card to begin with.
 
Are you living in the past? G-Sync on and V-Sync on means no tearing or input lag when frame rates go above the monitor's refresh rate. Nvidia solved these problems with the release of G-Sync 2.

Battlenonsense and Blurbusters recently did comprehensive input lag tests, and G-Sync still falls back to traditional V-sync when frame rates go beyond the max refresh rate of the monitor. That's why they both strongly recommend capping the FPS a couple of frames below the maximum G-Sync refresh rate to avoid it. AFAIK, G-Sync 2 only adds an HDMI port.
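That capping recommendation boils down to a trivial rule of thumb. A minimal sketch (the function name and the exact 3 fps offset are my own illustration, not from either tester's write-up):

```python
def vrr_frame_cap(max_refresh_hz: int, offset_fps: int = 3) -> int:
    """Frame-rate cap that keeps a G-Sync/FreeSync panel inside its
    variable-refresh window, so it never falls back to V-sync behaviour."""
    return max_refresh_hz - offset_fps

# Panels discussed in this thread:
print(vrr_frame_cap(144))  # 141
print(vrr_frame_cap(100))  # 97
```

The exact offset varies by write-up (a couple of frames up to a handful), but the point is the same: stay below the ceiling so the VRR logic never hands off to V-sync.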
 
See above. And these weren't Kyle's conditions, they were Nvidia's; that's how they wanted it done. And it's funny how not one of the 10 mentioned tearing/stuttering or anything like that at all. In fact, 6 of them couldn't see a difference between the two.
Kyle's conditions do not stop being Kyle's just because Nvidia wanted them. Kyle's test had those conditions set, so they are Kyle's [test] conditions.

And what part of the words "parallel universe" is going beyond your understanding? Yes, that did not happen, because it could not happen in this universe.
Really? You would take a Freesync monitor to go with your 1080ti? Haha, yes, sure you would. Both monitors get great reviews, both have roughly the same input lag and pixel response time, and both come highly recommended for gaming, especially at 100Hz.
I would take the G-Sync version of it, even though I haven't seen one (mostly because I was not looking), but I would not have an issue with the Freesync one. I mean, if you constantly sit above the refresh rate, who cares about Freesync or G-Sync?
As for input lag/pixel response, as far as I am aware, those particular VA panels are actually a little faster than the LG panel in the 348, a few particularly nasty black-gray transitions notwithstanding. And getting rid of those nasty transitions is not worth the $400 premium, if you ask me.
 
https://videocardz.com/71292/amd-radeon-rx-vega-64-official-pictures-leaked

There is a limited edition

AMD-Radeon-RX-Vega-64-Limited-Air.jpg


Same two 8 pin as FE.
Is it just me or are those really long?
 
Seems the same as the FE, which is about the same length as any of the nV cards it's going up against.
 
What? How does any of what you say matter to the test that was done? You are trying to imply that the test favoured AMD somehow, when Kyle himself says that this wasn't the case at all. He went out of his way to make sure this wasn't the case.

The 1080ti can push Doom at 100+ FPS at the resolution used. Read the HardOCP review back in March: it said that the 1080ti ran Doom beautifully and could get nearly 80fps average at 4K and max settings. Since then there have been performance improvements and driver fixes from Nvidia that push that figure higher.

Whereas we know from the FE benchmarks that Vega is only slightly faster than the 1080. And you experts have been saying that RX Vega is only going to be slightly faster than the FE edition. That's still going to be 20% slower than the 1080ti.

Which means in the test at 3440x1440, the min frame rate of the 1080ti is going to stay above 60fps whereas the min frame rates of the Vega system are going to dip below 60fps.
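For what it's worth, the arithmetic behind that claim can be sketched out. This takes the ~80 fps 4K figure from the HardOCP review and a flat 20% Vega deficit as given, and scales linearly with pixel count, which real games rarely do, so treat the numbers as rough upper bounds rather than predictions:

```python
# Pixel counts at the two resolutions
uhd_pixels = 3840 * 2160   # 4K: 8,294,400 pixels
uw_pixels = 3440 * 1440    # ultrawide: 4,953,600 pixels
scale = uhd_pixels / uw_pixels  # ~1.67x fewer pixels to render

gtx1080ti_4k = 80.0                  # fps average, per the March review
gtx1080ti_uw = gtx1080ti_4k * scale  # naive estimate at 3440x1440
vega_uw = gtx1080ti_uw * 0.8         # if RX Vega lands 20% behind

print(round(gtx1080ti_uw), round(vega_uw))  # 134 107
```

Note that both averages land well above 100 fps under these assumptions; averages alone don't settle anything, which is why the argument here hinges on the minimum frame rates.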

How can it be the worst possible case for Nvidia when their card has the best performance in that game?

How can it be the worst possible case for Nvidia if G-Sync is better than Freesync? (And I have seen several people make this claim.)

How can it be the worst possible case for Nvidia when you are putting a $1300 monitor against a $700 one?

My god, the Nvidia system should have just walked away with this test, yet out of 10 people, 6 could tell no difference, 3 preferred the AMD system and 1 preferred the Nvidia system. And these aren't just regular joes off the street, these are gamers and people who are working in this industry.

Does this say anything about the final performance of RX Vega? No, of course not. It was never meant to be that. The results were surprising, though, and even Kyle said that.
If the Vega system was dropping to 60fps and the 1080ti was sustaining 80fps then, sorry, no one would say the Vega looked/felt smoother or prefer it. Importantly, aspects of the engine also go beyond just fps, such as texture loading and visual fidelity as influenced by performance and movement.

Anyway, it can be little things that sway perception and preference; there are quite a few factors at work.
So you disagree with all the reviews showing Nvidia's low-level API performance being worse on an AMD platform compared to Intel, especially for the 1080ti, where multiple sites (including YouTube ones) clearly showed it performing badly in some of the low-level APIs on an AMD platform (I gave you a clear example with RoTR, btw)?
You also disagree that said reviews showed the DX11-to-DX12 relationship was better for AMD GPUs on said AMD platform than on Intel?
And finally, you disagree that much of the performance gain for AMD in Doom actually relates to the proprietary Vulkan extensions they implemented before Nvidia, which were used in the engine?
Testing showed async compute provided much less benefit than the low-level proprietary Vulkan extensions. Nvidia has started to build their own, but these will only be seen in new Vulkan games.

I find it ironic that those who upticked you moaned about Nvidia performance being crippled on the AMD Ryzen platform; to reiterate, no one knows WTF the cause is, and I highly doubt it is purely Nvidia's responsibility to resolve.

So if AMD actually comes out in front of or equal to Nvidia when the test is redone in future on an Intel i7 or HEDT platform, and importantly with another modern game whose optimisation is not so weighted towards one vendor over the other, then I would take more note. But this test did favour AMD, for the reasons I mention.
For now, as the article said, it was purely an AMD platform:
"Both test systems are identical outside of GPU and Display. AMD Ryzen 7 1800X CPUs with 16GB of 2666MHz RAM."

Anyway, what really matters is reviews across many games on both platforms; not long to wait now for that.
As Kyle said at the end of the video, this was just a fun test they could do and publish for now.
Cheers
 
Last edited:
They are just as robust, the functionality is there for them, and they have more. I'm sure AMD will catch up quite easily too; it's just a mesh fabric...

Infinity Fabric doesn't do shit right now for current architectures, which is what you are not understanding. There is nothing special about it. Chimera heterogeneous systems are actually even better than any other mesh out there right now, but they're tailored for specific needs, and that is what Intel and nV did with theirs. Simple. Just because AMD came out with it and hyped it to the moon doesn't mean it's something special that no one else has, man. How much did you hear nV talking about NVLink? They talked about it, but only to the people they needed to talk to about it, like the HPC and DL people.

For typical or general consumers, all this mesh tech really doesn't do anything extra, at least not right now.


Sorry, I don't believe you are being honest. You are incredibly full of misinformation.


Here is a little free dive. And I quote:

" So what is the big deal about it?


Well, even if you are to leave all the high level talk behind, one of the biggest impacts of Infinity Fabric is that it will allow AMD to fully utilize DRAM available to any SoC or GPU. This means that textbook and theoretical performance limits will be achievable and will result in an overall power efficient architectural design. Secondly if you take a look at the slides, you will notice how the Infinity Fabric is now so much more than just HyperTransport. It is the physical implementation of AMD’s all-encompassing Lego philosophy, for lack of a better word, where everything is fully scalable and 100% flexible.


Infinity Fabric is a coherent implementation which means that cache coherency is maintained across multiple processors externally and scaling up cores, in a CPU or a GPU, is not a problem and only limited by the bandwidth of the transport itself (which we have mentioned above).
This philosophy is in contrast to Intel’s stricter vision of a tailor-made design. It also allows AMD to scale up and down designs within a matter of hours rather than months without spending additional human capital or resources. This flexibility will allow it to serve a larger number of custom clients than anyone else.

AMD’s Infinity Fabric is basically divided into two distinct components or philosophies.
Data Fabric scalability and Control Fabric scalability.


When we usually talk about scalability, we are talking about the Data Fabric portion of things. This includes the HyperTransport concept, and the scalability in terms of cores/CPUs/dies etc. Needless to say we have already seen firsthand how well AMD is able to handle a diversified range of custom solutions.

The second offshoot is called the Control Fabric and is something that is newer, and very interesting. It extends the same concept but on a more intelligent level. For example, Ryzen will have machine learning integrated into the processor design so it will get modestly better at recurring tasks. Intel also has its own branch of prediction management but lets not get into that right now. This is just one example of the intelligent approach to the control design that has been taken as far as Infinity Fabric is concerned.



All of these things combined means that we will be seeing products from AMD that are constructed using a unified, flexible platform that uses a similar building block in just about everything. Oh and compatibility will be the cornerstone of this design implementation. I assume this would only solidify AMD’s position in the console market because the company will literally be handling them all more flexibility than they could ever ask for. "



I already suggested you didn't know much, but THAT^ pretty much negates all of the BS you have been trying to spew. I won't play your game.
 
Did you read what a CPU designer/engineer just stated about mesh technology?

As a former CPU designer, I would like to indicate that the role Infinity Fabric itself will play in multi-unit performance will not be a differentiator. That's the (relatively) easy part - all companies in question are quite capable of making something which fills the same role as well.

I'm not knocking it - it's a fine interconnect design. But meaning no disrespect to the designers - that's a relatively straightforward bit of tech that got a very nice marketing name.


You don't believe people that actually work with this shit day in, day out, but you will believe marketing?

Great. The job of marketing is to lie, or stretch the truth, and simplify things so you believe in their crap and buy their product.

https://community.amd.com/thread/211126

This is where you get your information from: an AMD blog that is based on marketing slides?

But you don't believe people that have real-world experience with the technology at a much lower level, because you are not willing to take the time to understand it?

Wow, you don't even know what the game is about or what it is, and yet you make dismissive remarks. Good, I don't want you to play my game, and if you buy the game at any time and I find out it's you, I will have the code deactivated. I would rather have level-headed people buying or using my product, which might actually come out for free, than people like you.

And P.S.: if you watch NBC or USA or any of the other 15 affiliates or channels, or any Universal movies, you are already paying my salary, lol. So it doesn't really matter if you buy a game that I am working on in my own time, either. Any which way, you pay for me :)
 

Nobody cares about Intel's mesh, bro!
You are simply using that to downplay Vega and AMD's Infinity Fabric, and you are doing a very bad job of it, because Intel doesn't do GPUs! He is not talking about APU fabric or GPU fabric; he was talking about an interconnect bus between CPUs.


You are once again trying to spread misinformation. Why not just hang out here and let this thread roll with ideas? Why so negative?
 


It's not misinformation. First off, you lied about what Raja Koduri stated about mGPU, what AMD's plans of execution regarding it are, and how little Infinity Fabric plays a role in that. You tried to play it off as if it was paramount to going full transparency with mGPU; it's not even needed for it. All you need is the available bandwidth, which can be done with ANY type of bidirectional interconnect.

If you want to know what Raja Koduri stated about it, it was the first Capsaicin and Cream event in 2016. As I stated, I know and remember what he stated. Go look it up.
 


22:30

We believe it's here to stay


We have to make the software and hardware infrastructure better.

SLI and Crossfire were made so devs don't need to do work.

Rendering approaches now don't work with SLI and Crossfire.

They need to get developers on board earlier now to push mGPU.

This is where GPUs are at right now
(paraphrased)

We are at that inflection point
We are in a much better place than CPUs

Next 5 years golden age of mGPU

Do I need to quote any more from Raja? Or were you just making shit up, sine wave?

So you have the leader of the RTG group saying mGPU is here to stay, that it needs developer intervention to make it happen, and that the next five years will be its best time, which is pretty much 2 to 3 generations of products. Where is Vega in this? Well, 1 year in, so there are still 4 more years to go. What's in there? Vega 2.0, Navi, and then another architecture after Navi, then maybe another architecture. Where does Infinity Fabric play into this? It doesn't, man!

You think you know more than him? Or a CPU engineer? Or a programmer that works with these things?
 
Alright...I'm relatively new here, so guys please correct me if I'm out of line, and mods feel free to edit this post if necessary...but...sine wave...are you AMD viral? You seem completely, way over the top optimistic about their products. First you talked about a dual Vega card, thinking it would be better than the GTX 1080 Ti and potentially Volta also. Now you're talking about infinity mesh, which is still in development and several years away from being deployed by either AMD or nVidia. You also had a former CPU engineer tell you that it's not going to work, yet you are continuing to carry on. I just don't understand it. Vega is looking like the biggest GPU flop we have seen in several generations. I mean, it's not Matrox Parhelia bad, but it's up there with the real turkey GPUs of all time. Yet you're here as the eternal optimist. Not just the eternal optimist; it's way over the top.

Please don't take that the wrong way but you're really raising my suspicions.
 


I don't think he has anything to do with AMD; he just believes the marketing hype... even after being shown AMD's plans on mGPU for the next few years.
 
That isn't entirely true. I'm not trying to belittle his engineering experience or know-how, but in the case of where I work, they had done things one way for two decades before I got there. Then I found new ways that they hadn't thought of and changed the way it is done. There is never an absolute when it comes to tech; the only limitation is the imagination of the designer/engineer. That isn't to say every idea will pan out in real-world usage, as we saw with GCN for the first 3 years, maybe 4. Hell, if it hadn't been for DICE, we may still not have utilized the untapped power within GCN/7970.
 

Well, as I stated, mesh technology is the first part of the equation. Yeah, there needs to be something there to do that work, which has been made, but in its current state it's pretty much what PhaseNoise is getting at: it's not going to be enough to make much difference. I even stated this and pointed out why: with Ryzen, CCXs communicating over Infinity Fabric have too much latency to cover up. But many other things need to happen before full transparency happens with multi-die technologies on the die side itself, which I went through point blank. This guy just doesn't want to believe anything outside of AMD marketing. Marketing isn't going to tell you everything they need to do to make such a solution work, and it's damn hard work. It's going to cost transistors, many more than they can allocate now at reasonable die sizes to bring to market. Eventually it will happen, but not for a couple more gens, pretty much another node or two. And it might not even happen for games initially, only for HPC units first, because of the cost ramifications of such a device.
 


I am sorry, but I have no reason to lie.

And once again, you are trying to spread misinformation, or are once again attempting to downplay and misrepresent things. It's obvious to me you are purposely looking past the other glaring remarks Raja has also made. But in your defense, seeing as you know very little about AMD and more about Nvidia, I will cut you some slack/rope.

Start here (OC3D):
" Infinity Fabric allows us to join different engines together on a die much easier than before. As well it enables some really low latency and high-bandwidth interconnects. This is important to tie together our different IPs (and partner IPs) together efficiently and quickly.

It forms the basis of all of our future ASIC designs.

We haven't mentioned any multi-GPU designs on a single ASIC like Epyc, but the capability is possible with Infinity Fabric. "


-Raja Koduri

See how he hints?
But even so, in your own exuberance you have ignorantly assumed that the quote I was talking about is the one you mentioned/linked. It was not!

It was during an "Ask Me Anything". Raja said this when repeatedly asked about possible FE drivers and games (referring to RX), etc. He responded to a flood of questions with this:

"My software team wishes this was true:)

(RX) Vega is both a new GPU architecture and also completely new SOC architecture. It's our first InfinityFabric GPU as well ! "


-Raja Koduri

The ironic thing here is, that I already told you Raja said that. You are not arguing with me, but Raja. You have essentially been calling Raja a liar this entire thread.

Again, I don't think you are being fair to fellow members, or to AMD. It is obvious you are trying to downplay anything they have done, while attacking anyone who is willing to discuss AMD's technology. You are here (in this thread) under false pretenses, and you are having a hard time accepting the fact that neither Intel nor Nvidia has consumer products about to release with Infinity Fabric.




New Vega Rumor: (warning, might trigger some people)
AMD is going to announce a new Radeon RX Vega 64 x2 w/4 stacks of HBM2 and it is going to be named TitanRipper !

Nov '17
 


He never mentioned what Infinity Fabric will do for Vega; you are assuming he is saying the same thing you are. But he isn't. He knows it takes software development to get mGPU to work; if that wasn't the case, they wouldn't create a separate mGPU group specifically for driver optimizations and for working with developers on mGPU.

Do you want to put words in his mouth and make things up, thereby making him the liar? Hmm, no, it doesn't work that way. He stated one thing: it uses Infinity Fabric. That means shit all.


Multi-GPU on the same interposer is definitely possible; ya don't need Infinity Fabric for that ;) Man, he didn't even say it's necessary; he said it can be done with Infinity Fabric the same way Epyc is done. He also didn't get into anything about the software needs. It's still going to need mGPU!

You are fancifully filling in lines that aren't there with your fantasies, without any basis of technical know-how.
 
Correct, and that also means all your speculation up to this point means shit all. And you have 8500 posts.
 
What? I'm not speculating; I'm basing it off of solid information about what has been done and what current tech can do, lol. And off what Raja and AMD have stated, and what nV has stated, which I don't even need to listen to. Talking about post counts: instead of looking at post counts, go through my posts about this topic from over a year ago, even before Raja mentioned anything about Polaris and mGPU, when the first talks about Infinity Fabric came out. I said the same thing back then, man. Even with nV's NVLink, same thing.

8500 posts with more than 4000 likes. Guess what, that's a pretty good ratio ;)

You have 48 posts, with half of them just useless AMD marketing diatribe mixed with your own inadequate understanding of what they are saying. Big-ass difference.

You can't talk about anything else, so now you want to talk about post count. Yeah, not going to fly, man.

You should write this in your sig:

"I know more than Raja, or any CPU EE or programmer, about AMD's current and future products, even though it doesn't make sense. My 48 posts of nonsense give me an e-penis the size of John Holmes."

Just whip it out there, man, you have a big one.
 
No need to defend yourself.
You are stating you don't think so. I am stating I think so.

The telltale sign of the extent of your knowledge shows every time you try to bolster your remarks, when you have to revert to using Intel's mesh or Nvidia's mesh to talk about AMD.



So let's stick to AMD stuff. What about cache coherency between two GPUs?

WCCFTECH
Vega’s memory and cache design is very unique. The architecture now features a single “High Bandwidth Cache Controller” that directly manages data going in and out of the level 2 cache and High Bandwidth Memory. This includes three hierarchies.

1 – The L2 cache itself

2 – The on-board second generation High Bandwidth Memory pool. Which now takes on a role very similar to an L3 cache. This is where you’d find HBM and GDDR5 memory in a typical graphics chip, managed by its own separate controller. Bringing this storage pool under the same umbrella as the L2 cache reduces latency, improves power efficiency and facilitates a more fluid movement of large data in and out of the graphics engine. In fact, AMD doesn’t even like to refer to this pool as “memory” anymore and instead calls it a “High Bandwidth Cache”.

3- Here we have network storage, system DRAM and NV RAM. Basically all memory not in close proximity to the GPU. This enables the memory architecture to support up to a 512TB virtual address space.

Finally, in Vega the render back-ends also known as Render Output Units or ROPs for short are now clients of the L2 cache rather than the memory pool. This implementation is especially advantageous in boosting performance of games that use deferred shading.
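As a quick sanity check on the 512TB virtual address space quoted above: 512TB is exactly 2^49 bytes, i.e. a 49-bit address (assuming binary terabytes, as GPU spec sheets typically use):

```python
TB = 2 ** 40                       # one (binary) terabyte in bytes
address_space = 512 * TB           # the 512TB quoted for Vega's HBCC
address_bits = address_space.bit_length() - 1  # exact log2 of a power of two

print(address_bits)  # 49
```

Which matches the 49-bit address space AMD's Vega material advertises; the number itself is just arithmetic, not a claim about how the HBCC performs.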



Care to backpedal that ??
 
The ironic thing here is, that I already told you Raja said that. You are not arguing with me, but Raja. You have essentially been calling Raja a liar this entire thread.

Oh.. Raja literally is a liar.. the mere fact that he mentions Vega as a new GPU architecture destroys all your stupid arguments, as that is a straight lie. ;)
 


Now you are going to quote WTF tech?


Cache coherency was done with NVLink a year before Infinity Fabric, lol! Did it change anything? NO!

Should I backpedal when I already know more about this than you?

Care to point to something that isn't WTF tech? And again, misguided, because that wasn't even talking about gaming; it's talking about reaching the exabyte limit, the new computing arms race.

Did you even see or listen to the presentation about reaching exabyte computing power? It's not about gaming, man.

Common sense, man: they spent a butt-load of money, and are still spending a butt-load of money, on mGPU with their new mGPU group. Why would they do that if it is not needed with Vega? Come on, even a person that doesn't understand the technical aspects can understand money, and if it's being pumped into mGPU, that means mGPU is going to be there for a long time.

More marketing slides too? Yeah, where are the details on how your proposed mGPU will be transparent with Vega? Those slides don't mention any of that, do they? They don't even broach the topic... Why? Because it can't be done with Infinity Fabric at current tech. You are just wishing your thoughts were real; they aren't. If it were so, you know AMD would be touting it. Not only that, they wouldn't make such a big Vega chip... See, money talks, BS walks. A smaller Vega chip uses less power, and two of them would compete well in the high end if transparent mGPU were possible, and it would cost less with increased yields. Again, it doesn't fit with any paradigm you throw at it, because it can't be done.
 
Last edited:
Guys, I can't claim to know much about this Infinity Fabric technology, but let me say two things:

- This thread has been completely derailed by discussing this feature, which is in all likelihood part of a product that does not exist (a dual-Vega card)
- My guess is that Vega has it baked in to help with the new Ryzen APUs they are making. They will probably build Vega cores into their Ryzen CPUs to hopefully make a kick ass APU. Again, I don't think the goal here is to make a dual Vega card, although it does sound somewhat possible from an engineering standpoint.
 
Now you are going to quote WTF tech?
Cache coherency was done with NVLink a year before Infinity Fabric, lol! Did it change anything? No!
Should I backpedal when I already know more about this than you?
Care to point to something that isn't WTF tech? And again, misguided, because that article wasn't even talking about gaming; it's talking about reaching the exabyte limit, the new computing arms race.
Did you even watch the presentation about reaching exabyte-level computing power? It's not about gaming, man.


I see... :rolleyes:

So you are ignoring the whole post because I asked you to stay on topic and speak about AMD's (advanced) version of cache coherency? And being unable to expound upon AMD's own technology, you choose to skip the whole discussion to let everyone know that Nvidia's (anemic) version has been out a year? Sounds like deflection to me.

How does your rebuttal help anyone here? What does it have to do with Infinity Fabric? We are not having a contest of what is better, bro; we are in an AMD GPU thread, talking about their GPUs and their new technologies. Stay on the subject!



Ironic, and an utter fail on your part, that you have repeatedly said AMD can't tap into the L2 cache, or this or that, and you are wrong. Consequently, it doesn't matter what site I link; they all have the same info.
You are just spreading FUD in AMD Vega threads. Nothing more to discuss with you.

Here it is again:

AMD's High Band Cache Controller
Vega’s memory and cache design is unique. The architecture now features a single “High Bandwidth Cache Controller” that directly manages data going in and out of the level 2 cache and High Bandwidth Memory. The hierarchy includes three tiers.

1 – The L2 cache itself

2 – The on-board second-generation High Bandwidth Memory pool, which now takes on a role very similar to an L3 cache. This is where you’d find HBM or GDDR5 memory in a typical graphics chip, managed by its own separate controller. Bringing this storage pool under the same umbrella as the L2 cache reduces latency, improves power efficiency and facilitates a more fluid movement of large data in and out of the graphics engine. In fact, AMD doesn’t even like to refer to this pool as “memory” anymore and instead calls it a “High Bandwidth Cache”.

3 – Here we have network storage, system DRAM and NVRAM: basically all memory not in close proximity to the GPU. This enables the memory architecture to support up to a 512 TB virtual address space.

Finally, in Vega the render back-ends, also known as Render Output Units or ROPs for short, are now clients of the L2 cache rather than the memory pool. This implementation is especially advantageous in boosting performance in games that use deferred shading.
*****
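For what it's worth, the three-tier arrangement the quoted article describes can be sketched as a toy demand-paging model. Everything here (class names, page counts, LRU eviction) is a hypothetical illustration of the general idea, not AMD's actual design:

```python
# Toy model of a tiered memory hierarchy like the one described above:
# a small fast tier (L2), a larger "high bandwidth cache" tier (HBM),
# and a huge backing store (system DRAM / NVRAM / network storage).
# All names, sizes and policies are illustrative, not AMD's hardware.

from collections import OrderedDict

class TieredMemory:
    def __init__(self, l2_pages=4, hbc_pages=16):
        self.l2 = OrderedDict()    # fastest, smallest tier (LRU order)
        self.hbc = OrderedDict()   # on-package HBM acting as a cache
        self.l2_pages = l2_pages
        self.hbc_pages = hbc_pages
        self.hits = {"l2": 0, "hbc": 0, "backing": 0}

    def access(self, page):
        """Touch a page, promoting it toward the fastest tier."""
        if page in self.l2:
            self.l2.move_to_end(page)   # refresh LRU position
            self.hits["l2"] += 1
        elif page in self.hbc:
            del self.hbc[page]          # promote out of the HBM tier
            self.hits["hbc"] += 1
            self._fill_l2(page)
        else:
            # Miss everywhere: paged in from the big virtual address space.
            self.hits["backing"] += 1
            self._fill_l2(page)

    def _fill_l2(self, page):
        self.l2[page] = True
        if len(self.l2) > self.l2_pages:
            evicted, _ = self.l2.popitem(last=False)  # evict LRU page...
            self.hbc[evicted] = True                  # ...down into the HBM tier
            if len(self.hbc) > self.hbc_pages:
                self.hbc.popitem(last=False)          # spill to backing store

mem = TieredMemory()
for page in [1, 2, 3, 1, 1, 4, 5, 6, 2]:
    mem.access(page)
print(mem.hits)  # → {'l2': 2, 'hbc': 1, 'backing': 6}
```

The last access (page 2) is the interesting one: it was evicted from the small fast tier but is served from the middle tier instead of going all the way out to backing storage, which is the whole pitch of treating HBM as a cache rather than as plain memory.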


Honestly, I'm not sure why you are angry and combative (and feel the need to spread FUD) over AMD stuff, but as a gamer I can't wait for this premium technology to hit the shelves. How can you even harp on AMD for this?

AMD's HBCC tied to HBM2 in a Vega x2 SoC design, smothered in AMD's secret sauce?
Sounds like TitanRipper to me !!
 
Sounds like TitanRipper to me !!
You keep dreaming about that dual Vega card. I will stay grounded in reality and enjoy gaming on my GTX 1070 SLI setup. If and when AMD releases something better than what I have now for a decent price, I will be in for one. The more likely scenario, however, is that Volta is going to come out and relegate AMD to obscurity in terms of GPU technology. It could be the death blow; AMD is already pretty much relegated to the consoles. Speaking of consoles, I was reading an article by an analyst who thinks the Xbox Scorpio is going to fail because only 10% of US households own a 4K TV and it's too expensive.
 
Didn't Raja try to peddle 2 x 480 as being faster than a 1080 early on? Yeah, he's totally honest and forthright. AMD fans are funny: they realize Vega is a disaster, so now they start coming up with a "TitanRipper" fantasy. How long until Navi saves the world? It's always the same story with AMD and their fans: "just wait..."

P.S. Since we're making shit up, don't be shocked to see consumer Volta by the end of this year, which would all but bury AMD for good.
 
No, optimizations for CPUs will not solve this problem. AMD's own tests of Epyc with Anandtech show this problem, and AMD stated they might be able to get a 10 to 15% performance uplift, but Epyc is behind by 50%; it takes 50% performance hits in specific apps. Don't even try to BS around that one; that is what AMD stated. We even talked about this in another thread, and I linked the article as well.

It gets really tiresome when AMD says something and people don't take their word for it, even when it's a negative comment about their own products. That is the only time AMD is ever truthful about their products: when they themselves point out weaknesses in their hardware.

It's like you don't read what you type, man; you just said that AMD thinks they can increase performance by 10% to 15% by tweaking it. Yeah, we were aware that NUMA would hurt it, and AMD had stated that; the simple fact remains it can be improved some. I only saw about two benchmarks where Epyc suffered, and yeah, if your servers do that particular operation all the time, then I would stick with Intel. There were also a couple of benchmarks where Epyc beat Intel pretty soundly and thus would be a good investment for some. We all get tired of your never-ending negativity, but I doubt that is changing anytime soon. All you do is focus on the negative and blow it out of proportion, yet I never see you in an Intel thread discussing the fact that the new Skylake chips are performing worse than Kaby Lake chips due to the use of mesh technology and the change in the cache arrangement.
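As an aside, the NUMA effect being argued over here can be sketched with a back-of-the-envelope model. The latency numbers below are invented purely to show the shape of the penalty; they are not measured Epyc or Skylake figures:

```python
# Toy model: average memory latency on a multi-die (NUMA) package.
# LOCAL_NS and REMOTE_NS are hypothetical numbers for illustration only.

LOCAL_NS = 90     # latency to memory attached to the same die
REMOTE_NS = 200   # latency to memory attached to another die

def avg_latency(remote_fraction):
    """Blended latency when some fraction of accesses cross dies."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

# A NUMA-unaware workload that scatters its data across dies pays a
# heavy blended cost; keeping data local recovers most of it.
print(avg_latency(0.0))   # all accesses local
print(avg_latency(0.5))   # half the accesses cross dies
```

The gap between those two numbers is the kind of per-app hit that OS and scheduler tweaks try to claw back by keeping threads near their data, which is why a 10 to 15% software uplift is plausible without making the underlying cross-die penalty go away.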
 
You keep dreaming about that dual Vega card. I will stay grounded in reality and enjoy gaming on my GTX 1070 SLI setup. If and when AMD releases something better than what I have now for a decent price, I will be in for one. The more likely scenario, however, is that Volta is going to come out and relegate AMD to obscurity in terms of GPU technology. It could be the death blow; AMD is already pretty much relegated to the consoles. Speaking of consoles, I was reading an article by an analyst who thinks the Xbox Scorpio is going to fail because only 10% of US households own a 4K TV and it's too expensive.


Sorry, that is not true.

I am not dreaming about anything. I am TRYING to discuss AMD's Infinity Fabric and the virtues it could possibly bring to the high-end gamer. But that topic keeps getting trolled and threadcrapped on by people who would rather talk about Nvidia while badmouthing AMD.

Understand, it is OK to admire and seek knowledge and discussion about Vega technology, or even to theorize about AMD's technologies and possible path in an AMD-titled thread. (It is a Vega rumor thread, after all.)




Secondly, I already have Nvidia stuff. I want more; hence my hunger for an HEDT GPU.

But what is troubling and tiring is the amount of negative and derogatory remarks being fired off by the same people, ad nauseam, while others are trying to have a legitimate discussion here. It is annoying and has to stop. Glad Kyle stepped in, because I am old school and believe people should have to show a driver's license (or credit card number) to open an account. (That way the cellphone kiddies need not apply.)

Allowing this anti-AMD bashing to continue is just a negative experience here at HardOCP. I know some of you get off on the giggles, but your actions are pushing away real readers, with real incomes and real interests. I am not afraid of a face-to-face or a meet-and-greet, but being attacked for raising an AMD subject, with posts trying to dissuade my buying habits, when all I am doing is talking about technology, is getting old. I buy whatever I want, whenever I want; I don't fret it. Not everyone here is a poor college student who dreams about owning SLI Titans. I remember wanting things back then too; the difference now is I can afford what I want! Consequently, Volta is insignificant and is 10 months away, because GDDR6 doesn't start production until Q2 2018. Meanwhile, the fact that Nvidia is locked out of the console market is a death blow only to Nvidia. It can only be seen as a boon for AMD, but nice try on your spin.

I am nearing 50 years of age and about to drop some serious cash on some full-on gaming rigs. I have good reason to believe AMD is going to come through on Infinity Fabric and Vega and offer us high-end gamers a TitanRipper-like card later this year.

If someone is not all for that goodness, then I would wager they are not a gamer, or they are peddling an agenda.
 
Now you are going to quote WTF tech?

That site has been brutal since Fiji launched. The shit they were peddling about Ryzen prior to launch was so damn cringey. They've been pretty over the top with Vega, but not nearly as much as with Ryzen.
 
I have good reason to believe AMD is going to come through on Infinity Fabric and Vega and offer us high-end gamers a TitanRipper-like card later this year.

If someone is not all for that goodness, then I would wager they are not a gamer, or they are peddling an agenda.

So what exactly are you hoping for? A dual GPU card based on Vega?
 
No need to defend yourself.
You are stating you don't think so. I am stating I think so.

The tell-tale sign, and the extent of your knowledge, is illustrated every time you try to bolster your remarks: you have to revert to Intel's mesh, or Nvidia's mesh, to talk about AMD.



So let's stick to AMD stuff. What about cache coherency between two GPUs?

WCCFTECH




Care to backpedal on that?
You know Volta makes the GPU cache coherent between all meshed V100s and CPU memory, let alone the massively increased bandwidth and advanced Unified Memory?
Yeah, that means to get the most out of it you would need a Power9 platform, but for AMD to get anything close to similar would mean needing an all-AMD platform against Intel, so requiring both the FE GPU and Epyc from AMD.
And cache coherency with fat-node mesh/scale-out is far from easy to do.

The point is that Infinity Fabric is potentially great for APUs, but with regard to dGPUs and HPC, much of what is offered can already be done at large scale today; Pascal P100s are already in supercomputer projects integrated with Omni-Path/BlueLink/InfiniBand, and there are already advanced fast flash-cache solutions with their own integrations into Power9, with more coming for Intel platforms.
Separately, one potential benefit of HBCC could be powerful workstations and professional CAD/rendering/visualisation, but Nvidia has changed the model here by pushing GPU Grid and powerful backend solutions; both approaches have viability.

Infinity Fabric, though, is just a generalised name for various functionality that already exists within the HPC space; the difference is that AMD is looking to pull it all under one roof and potentially simplify it. That is the crux: whether it can be more simplified than the disparate technologies it competes with in the HPC space.
Let's not fool ourselves: its purpose is larger-scale/HPC implementations rather than the consumer/PC segment (APUs being the exception, and these could potentially bridge both segments in the longer term, though it is still unclear whether they would displace hybrid dGPU-CPU nodes).
Cheers
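To illustrate the "far from easy" part: even a minimal directory-based coherence scheme, sketched below in textbook MSI style (not any vendor's actual protocol, and all names are made up for illustration), shows how every write can fan out invalidation traffic across the link:

```python
# Toy directory-based cache coherence: a directory tracks which device
# holds each cache line so reads and writes stay consistent.
# This is a textbook-style sketch, not NVLink's or Infinity Fabric's design.

class Directory:
    def __init__(self):
        self.sharers = {}   # line -> set of devices holding a read copy
        self.owner = {}     # line -> device holding an exclusive (dirty) copy

    def read(self, dev, line):
        """A device reads a line: any exclusive owner must be downgraded."""
        owner = self.owner.pop(line, None)
        if owner is not None and owner != dev:
            # Coherence traffic: owner writes back and keeps a shared copy.
            self.sharers.setdefault(line, set()).add(owner)
        self.sharers.setdefault(line, set()).add(dev)

    def write(self, dev, line):
        """A device writes a line: every other copy must be invalidated."""
        invalidations = self.sharers.pop(line, set()) - {dev}
        self.owner[line] = dev
        return len(invalidations)   # messages sent over the fabric

fabric = Directory()
fabric.read(0, line=0x40)           # GPU0 caches the line
fabric.read(1, line=0x40)           # GPU1 caches it too (shared)
msgs = fabric.write(0, line=0x40)   # GPU0 writes: GPU1 must be invalidated
print(msgs)  # → 1
```

Scale that up to GPU-sized caches and GPU bandwidths and the invalidation and write-back traffic is exactly what makes coherent multi-GPU hard, whichever fabric carries it.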
 