Vega Rumors

Do you realize nV with CUDA has more features than HSA as a whole, from everyone involved? LOL, and you say they are behind? Do you realize nV's mesh technology was released before Infinity Fabric and has more forward-looking features implemented and in use right now? Something AMD is still in the process of implementing for future generations of GPUs? AMD has no presence in the HPC (HSA) market, and you want to sit here and post about things that aren't even widely used in the WORLD?

It would be better to look things up before you spout off about who is talking shit, who is being obnoxious, or who thinks he knows what he is talking about because he read a few marketing presentations from AMD. You can look at many of the talks here about HSA vs CUDA and AMD vs nV implementations; it has been discussed many times over.

RAJA NEVER STATED MGPU WILL BE REPLACED ANY TIME SOON!

That is BS. What HE STATED WAS that mGPU is the near future, and that later on it will be replaced with transparent technology, which is the ultimate goal, but in the meantime DEVELOPERS WILL NEED TO GET THEIR HANDS DIRTY. THERE WAS NO TIMELINE ON THAT IMPLEMENTATION of transparency. Please don't sit here and make things up about what Raja stated, because I can link everything he said.

Am I badmouthing the situation when I'm pulling up relevant facts which you can't even discuss, maybe because you just don't understand them? Hmm? You were the one who stated you didn't want to get into specifics, so I did, and now you are just talking BS. Counter what I stated, please, instead of this rhetoric and these AMD marketing campaigns we were never talking about. It's not badmouthing to say it's not going to happen with current tech because of the limitations current tech has; it's just reality. Also, this solution isn't what AMD needs to be competitive: if you glue two bad chips together, you still get one bad solution. AMD needs to rework their core GPU architecture to get anywhere, not stick two ICBMs in one package and call it a day.

I'm so damn sure about it because I'm a programmer and have been writing multi-GPU applications for years, working on them in many different capacities across the entire graphics and programming pipeline. I have also done HPC and AI work in the past, so I am familiar with the capabilities of software vs hardware, what can be done, and what can't be done currently. This is exactly why I knew about the NUMA issues Ryzen and Epyc would have the second the CCX latency issue popped up (actually even before that; the results were too obvious to ignore). I had seen it before, and you know with what architecture? Yeah, Pentium D. Granted, Pentium D had other issues, but a shared front-side bus just doesn't work well where latency is concerned; there's just not enough bandwidth to supply both chips properly.
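For anyone who would rather measure the NUMA penalty being described than argue about it, here is a minimal sketch (my own illustrative example, assuming Linux with libnuma installed; node IDs, buffer size, and step count are arbitrary). It pins the thread to one node, allocates a pointer-chase buffer on a chosen node, and reports nanoseconds per dependent load, so local vs. remote allocations can be compared directly:

```cpp
// Minimal sketch: compare local vs. remote NUMA memory latency with a dependent
// pointer chase (the prefetcher cannot hide latency when every load depends on
// the previous one). Assumes Linux + libnuma; node IDs, buffer size and step
// count are arbitrary. Build: g++ -O2 numa_probe.cpp -lnuma
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <numeric>
#include <random>
#include <vector>
#include <numa.h>

int main(int argc, char** argv) {
    if (numa_available() < 0) { std::fprintf(stderr, "no NUMA support\n"); return 1; }
    const int cpu_node = argc > 1 ? std::atoi(argv[1]) : 0;  // node the thread runs on
    const int mem_node = argc > 2 ? std::atoi(argv[2]) : 0;  // node the buffer lives on
    numa_run_on_node(cpu_node);

    const size_t n = (256u << 20) / sizeof(size_t);          // 256 MiB, well past any L3
    size_t* chain = static_cast<size_t*>(numa_alloc_onnode(n * sizeof(size_t), mem_node));
    if (!chain) { std::fprintf(stderr, "allocation failed\n"); return 1; }

    // Link the buffer into one big random cycle so each access hits a random line.
    std::vector<size_t> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (size_t i = 0; i < n; ++i) chain[order[i]] = order[(i + 1) % n];

    size_t idx = order[0];
    const size_t steps = 20000000;
    const auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < steps; ++i) idx = chain[idx];
    const auto t1 = std::chrono::steady_clock::now();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
    std::printf("cpu node %d, mem node %d: ~%.1f ns per dependent load (sink %zu)\n",
                cpu_node, mem_node, ns, idx);
    numa_free(chain, n * sizeof(size_t));
    return 0;
}
```

Running it with the memory on the local node and then on a remote node shows the kind of latency gap that the CCX/NUMA discussion above is about.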


NUMA can be tweaked, and no one has had a reason to optimize for AMD until now. To be honest, Epyc has done really well in benchmarks, but no design is flawless. Latency also has a ton to do with the distance the signal has to travel and the medium it travels in, which is one of the reasons we started using fiber optics in cars as we added more modules all over the vehicle.

Monolithic designs have been favored for a reason, and that reason is bandwidth, but the reality is we're getting much better at sending data at blazing speeds, and the wall at the end of node shrinks is about to be hit. Necessity is the mother of invention, and AMD, Intel, and Nvidia are about to have a great need. I expect the monolithic design is on its last legs, just as it turned out to be in the auto industry. At one time the only way to improve power was to add more cylinders and increase piston size; then we started to realize there are diminishing gains from going bigger and bigger. As pistons get bigger, so do the rods and crank that deal with all the forces and weight, and then you need to feed that monster, so your valve train gets heavy and hard to fit and your efficiency goes into the toilet. So to increase power we limited piston size, focused on increasing head flow, and limited the length of the crankshaft. Nowadays we have four-cylinder engines that out-produce those old V16s, which is why bigger is not always better.

I expect CPU and GPU design to move toward purpose-specific units for MMX, AVX, and so on, with each of those processors optimized to run that code flawlessly. That would require the processor to stitch it all together, and I think that is where everyone is heading.
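As a purely software-side illustration of the "purpose-specific units for MMX, AVX and so on" idea, here is a minimal runtime-dispatch sketch (assumes GCC or Clang on x86; the function names and the summation workload are made up for the example). The program picks an AVX2 path only when the CPU reports support, which is the kind of stitching-together that dedicated hardware blocks would take further:

```cpp
// Minimal sketch of runtime feature dispatch, the software side of "purpose-specific
// units for MMX, AVX and so on": use an AVX2 path only when the CPU reports support.
// Assumes GCC or Clang on x86; function names and workload are illustrative only.
// Build: g++ -O2 dispatch.cpp
#include <cstdio>
#include <immintrin.h>

static float sum_scalar(const float* x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += x[i];
    return s;
}

__attribute__((target("avx2")))
static float sum_avx2(const float* x, int n) {
    __m256 acc = _mm256_setzero_ps();
    int i = 0;
    for (; i + 8 <= n; i += 8) acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float s = 0.0f;
    for (float v : lanes) s += v;
    for (; i < n; ++i) s += x[i];  // leftover elements
    return s;
}

int main() {
    float data[1000];
    for (int i = 0; i < 1000; ++i) data[i] = 0.001f * i;

    // Pick the specialized path at runtime based on what the CPU actually offers.
    const bool has_avx2 = __builtin_cpu_supports("avx2");
    const float s = has_avx2 ? sum_avx2(data, 1000) : sum_scalar(data, 1000);
    std::printf("sum = %f (%s path)\n", s, has_avx2 ? "AVX2" : "scalar");
    return 0;
}
```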
 
Not saying I disagree that the panels were probably more important and that AMD likely chose a certain one for a reason... but it's not like Vega can do G-Sync or nVidia can do FreeSync. So the comparison does include the card.

I would have liked to see how it would have gone with motion blur off. It sounded like nVidia uses more aggressive motion blur, IMO.

The motion blur should be the same; I don't think there is anything exposed for either vendor to do special motion blur =p
 
Do you realize nV with CUDA has more features than HSA as a whole, from everyone involved? LOL, and you say they are behind? Do you realize nV's mesh technology was released before Infinity Fabric and has more forward-looking features implemented and in use right now? Something AMD is still in the process of implementing for future generations of GPUs? AMD has no presence in the HPC (HSA) market, and you want to sit here and post about things that aren't even widely used in the WORLD?

It would be better to look things up before you spout off about who is talking shit, who is being obnoxious, or who thinks he knows what he is talking about because he read a few marketing presentations from AMD. You can look at many of the talks here about HSA vs CUDA and AMD vs nV implementations; it has been discussed many times over.

RAJA NEVER STATED MGPU WILL BE REPLACED ANY TIME SOON!

That is BS. What HE STATED WAS that mGPU is the near future, and that later on it will be replaced with transparent technology, which is the ultimate goal, but in the meantime DEVELOPERS WILL NEED TO GET THEIR HANDS DIRTY. THERE WAS NO TIMELINE ON THAT IMPLEMENTATION of transparency. Please don't sit here and make things up about what Raja stated, because I can link everything he said.

Am I badmouthing the situation when I'm pulling up relevant facts which you can't even discuss, maybe because you just don't understand them? Hmm? You were the one who stated you didn't want to get into specifics, so I did, and now you are just talking BS. Counter what I stated, please, instead of this rhetoric and these AMD marketing campaigns we were never talking about. It's not badmouthing to say it's not going to happen with current tech because of the limitations current tech has; it's just reality. Also, this solution isn't what AMD needs to be competitive: if you glue two bad chips together, you still get one bad solution. AMD needs to rework their core GPU architecture to get anywhere, not stick two ICBMs in one package and call it a day.

I'm so damn sure about it because I'm a programmer and have been writing multi-GPU applications for years, working on them in many different capacities across the entire graphics and programming pipeline. I have also done HPC and AI work in the past, so I am familiar with the capabilities of software vs hardware, what can be done, and what can't be done currently. This is exactly why I knew about the NUMA issues Ryzen and Epyc would have the second the CCX latency issue popped up (actually even before that; the results were too obvious to ignore). I had seen it before, and you know with what architecture? Yeah, Pentium D. Granted, Pentium D had other issues, but a shared front-side bus just doesn't work well where latency is concerned; there's just not enough bandwidth to supply both chips properly.


er.. "You are so sure of yourself, yet are missing some simple things..." <---- Your post above proves I was right.

It seems you are not seeing the big picture, or are indeed just trying to dismiss AMD, because Nvidia's mesh (and Intel's, for that matter) is junior stuff and nothing close to as robust as AMD's "Infinity Fabric". You are embarrassing yourself, even more so now that, after complaining and telling everyone in this community that AMD is incapable of doing this or that (without even knowing the truth), you are claiming Nvidia already beat AMD to it. Thanks for the comic relief.


Anyone here who is trying to downplay AMD's Infinity Fabric is doing so because they are not an enthusiast!!
 
People chose it over the nVidia setup. How is that not winning?

Was it biased? Probably. Do I put much weight into it? Nah. But Vega still won.

It's purely based on perception; apparently you've never been with a pretty girl who was truly fugly once the make-up came off.
 
[Image: 2uikcvp.jpg]


The image above is of one of the systems that AMD will use at SIGGRAPH to show off Radeon RX Vega.

The motherboard used is MSI Z97 MPOWER MAX AC (Intel Z97).

So, one has to wonder why AMD is using a Haswell/Broadwell system to show off Radeon RX Vega.
 
So double 8-pins: either they're there for stability, or it's definitely requiring more power than the 1080 (the 1080 has 8+6).

Doesn't look like there is an AIO in there, or if there was, it got dismantled.
 
So double 8-pins: either they're there for stability, or it's definitely requiring more power than the 1080 (the 1080 has 8+6).

Doesn't look like there is an AIO in there, or if there was, it got dismantled.

Those are probably there if you want to overclock. I think base power is a 275 W TDP on gaming Vega, going by the leaks.
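For reference, the standard PCIe power budget arithmetic supports that reading: the slot is rated for 75 W and each 8-pin connector for 150 W, so an 8+8 card has far more headroom than a rumoured 275 W TDP needs, and more than an 8+6 card like the 1080 mentioned above:

$$\underbrace{75}_{\text{slot}} + \underbrace{150 + 150}_{2 \times \text{8-pin}} = 375\ \text{W available} \qquad \text{vs.} \qquad 75 + 150 + 75 = 300\ \text{W for 8+6}$$

The extra connector does not mean the card draws 375 W, only that overclocking headroom was clearly part of the design.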
 
[Image: 2uikcvp.jpg]


The image above is of one of the systems that AMD will use at SIGGRAPH to show off Radeon RX Vega.

The motherboard used is MSI Z97 MPOWER MAX AC (Intel Z97).

So, one has to wonder why AMD is using a Haswell/Broadwell system to show off Radeon RX Vega.
I would say most systems are Intel, so having an Intel system may be a good call. Actually, having a variety of systems would probably be better; RTG stuff needs to support Intel just as well as AMD systems.
 
Those are probably there if you want to overclock. I think base power is a 275 W TDP on gaming Vega, going by the leaks.
Radeon Chill may be put to good use with Vega ;). Nice-looking cards, even if it's not a complete view.
 
I would say most systems are Intel, so having an Intel system may be a good call. Actually, having a variety of systems would probably be better; RTG stuff needs to support Intel just as well as AMD systems.

Right, but why Haswell?
 
Right, but why Haswell?
Luck of the draw, or it works best? I don't know; I don't think it matters at all. There will be plenty of reviews on all sorts of systems shortly. Now I would like to see some comparisons between Intel and Ryzen systems.
 
So we have now officially gone from "It's not a gaming card, you ding!"® to "Wait for FineWine drivers, you mong!"™ and now to "But Infinity Buzzword!"©

Can't we just wait 3 days before spinning fairy tales? I'm deeply pessimistic based on what we know so far (mainly Vega FE performance), but willing to be surprised.
 
If you are offering then you owe him about $1500.
If I were offering, I would ask him to cite every time I said X was impossible. Since I have only 1500 posts, it should not take long to Ctrl-F through them.

So no, I am asking whether the same fairy that will make Vega more power-efficient than the 1080 Ti gave him the $1 or not.
 
People chose it over the nVidia setup. How is that not winning?

Was it biased? Probably. Do I put much weight into it? Nah. But Vega still won.
You mean it won in this scenario:
a) Doom, where AMD currently has more extensions for low-level access (Nvidia is now implementing its own proprietary extensions to improve Vulkan low-level performance, but they will only come into force with new Vulkan games).
..... The real world is more than just Vulkan and one game using extensions that, at the time, gave AMD a performance edge.
b) Both were on an AMD platform only, where we know AMD GPUs, funnily enough, perform better in DX12/low-level APIs than they do on an Intel platform (going by the DX12-over-DX11 gains in test comparisons).
..... In the real world, most people buying Nvidia GPUs probably use them with an Intel platform, especially as it has come out that, for whatever reason, Nvidia GPUs have some kind of problem with low-level APIs on the AMD platform, especially the 1080 Ti (see DX12 RoTR for an example).

Sorry, but no conclusion can come from that; it is the worst scenario for Nvidia, and I would not put any weight on a conclusion even if it were the other way round, heavily favouring Nvidia.
Cheers
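For anyone who wants to check point (a) on their own hardware, here is a minimal sketch that lists the vendor-specific Vulkan device extensions a driver exposes (these are standard Vulkan API calls; error handling is omitted and the VK_AMD_/VK_NV_ prefix filter is only for illustration):

```cpp
// Minimal sketch: list vendor-specific Vulkan device extensions (the kind of
// VK_AMD_* / VK_NV_* extensions a Vulkan renderer like Doom's can opt into).
// Build against the Vulkan SDK, e.g. g++ ext_list.cpp -lvulkan. No error handling.
#include <cstdio>
#include <cstring>
#include <vector>
#include <vulkan/vulkan.h>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ici{};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ici.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::printf("%s\n", props.deviceName);

        uint32_t extCount = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, nullptr);
        std::vector<VkExtensionProperties> exts(extCount);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, exts.data());

        // Print only the vendor-prefixed extensions to highlight what each IHV exposes.
        for (const VkExtensionProperties& e : exts) {
            if (std::strncmp(e.extensionName, "VK_AMD_", 7) == 0 ||
                std::strncmp(e.extensionName, "VK_NV_", 6) == 0) {
                std::printf("  %s (rev %u)\n", e.extensionName, e.specVersion);
            }
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

On GCN hardware of that era the AMD list typically included extensions such as VK_AMD_gcn_shader and VK_AMD_shader_ballot, which is the kind of low-level access being referred to above.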
 
I am glad you have scaled your argument down to something less obnoxious.

I agree with you for the most part and understand the complexities involved. I just do not think you are grasping how far out ahead AMD is in this department. It has nothing to do with Nvidia, but AMD is at the forefront of Heterogeneous Computing (HSA), and they have an answer for the many things you claim they don't. Also, let's not forget that AMD has been pursuing associated technologies that fit nicely into their HSA ecosystem. AMD is simply headed in a different direction than Nvidia.

You are so sure of yourself, yet you are missing some simple things and are unable to admit that Raja said it's coming. Even so, I don't see you speculating on HOW, or on WHAT Infinity Fabric could mean for gamers, or allowing yourself to speak about the good it could bring. All you want to do is badmouth the situation and claim AMD is less competent than you. All the while, I personally watched a developer on stage at C&C claim near-100% scaling using Vega. Didn't that ring a bell for anyone about where AMD might be going with their RX gaming line in the immediate future? I foresee AMD making "TitanRipper" happen before Xmas, then leveraging Vega x2 on their push to 7nm Navi, with further advancements in Infinity Fabric (Infinity Fabric 2.0), in a tick-tock cadence. I bet we might even see a Vega x4 at some point nearing 2019.


I'll trust what Dr. Su and Raja have said over you. Sorry!

HSA was created to compete against Nvidia/Intel......
Here is The Register from when one of the HSA leaders left AMD to join Nvidia back in 2015:
Rogers is no longer listed as president of the Heterogeneous System Architecture Foundation, an anti-Nvidia-Intel consortium that includes the usual non-Chipzilla gang: ARM, Imagination, Mediatek, Qualcomm, Samsung, and so on. The group's goal is to create an architecture that melds graphics accelerators and application processors together into the same virtual memory address spaces so software can easily use GPUs to speed up tasks. It should also drive down power usage by simplifying the design of systems with fewer buses and controllers.

In March, the foundation published its specification for making HSA-compatible processors and chipsets. AMD's Carrizo family of system-on-chips, revealed in February, are HSA-compliant.

TBH this will only impact Nvidia against AMD when it comes to APU solutions rather than GPUs.
Nvidia is actually ahead when it comes to discrete GPU-CPU coherent cache/memory management, comparing Pascal against the previous AMD generation, and TBH Vega FE is up against Volta, which improves coherent cache/memory management even further and enables 6 V100 GPUs to be fully meshed together (NVLink2).
Put it this way: in one very recent announcement, a research lab in Australia (CSIRO) signed a contract with Dell to provide a small supercomputer (only $4 million). The bids went out in late 2016 and the build starts in late 2017, and rather than pushing with the HSA of Vega they went with the mesh distribution of P100s combined with Intel CPUs.
Bear in mind Dell is also building a relationship with AMD.
A similar situation occurred in Japan with TSUBAME3.0, which went with a balanced, distributed, coherent P100 setup.
Nearly every supercomputer being built or planned around meshed GPUs and coherent cache/unified memory uses either P100 or V100, so AMD still has a challenge outside of the APU market in these fat-node meshed solutions.
Cheers
 
NUMA can be tweaked, and no one has had a reason to optimize for AMD until now. To be honest, Epyc has done really well in benchmarks, but no design is flawless. Latency also has a ton to do with the distance the signal has to travel and the medium it travels in, which is one of the reasons we started using fiber optics in cars as we added more modules all over the vehicle.

Monolithic designs have been favored for a reason, and that reason is bandwidth, but the reality is we're getting much better at sending data at blazing speeds, and the wall at the end of node shrinks is about to be hit. Necessity is the mother of invention, and AMD, Intel, and Nvidia are about to have a great need. I expect the monolithic design is on its last legs, just as it turned out to be in the auto industry. At one time the only way to improve power was to add more cylinders and increase piston size; then we started to realize there are diminishing gains from going bigger and bigger. As pistons get bigger, so do the rods and crank that deal with all the forces and weight, and then you need to feed that monster, so your valve train gets heavy and hard to fit and your efficiency goes into the toilet. So to increase power we limited piston size, focused on increasing head flow, and limited the length of the crankshaft. Nowadays we have four-cylinder engines that out-produce those old V16s, which is why bigger is not always better.

I expect CPU and GPU design to move toward purpose-specific units for MMX, AVX, and so on, with each of those processors optimized to run that code flawlessly. That would require the processor to stitch it all together, and I think that is where everyone is heading.

NO, optimizations for CPUs will not solve this problem. AMD's own tests of Epyc with Anandtech show it, and AMD stated they might be able to get, oh, a 10 to 15% performance uplift, but Epyc is behind by 50%; it takes 50% performance hits in specific apps. Don't even try to BS around that one, that is what AMD stated. We even talked about this in another thread and I linked the article as well.

It gets really tiresome when AMD says something and people don't take their word for it even when it's a negative comment about their own products. That is the only time AMD is ever truthful about their products: when they themselves point out weaknesses in their hardware.
 
How would scaling exceed? Please give me examples, because if you drop frequency on, say, a dual-GPU part, your scaling still goes down based on that drop, plus the advantages of the reduced voltage needs. Scaling of performance, though, is still based on frequency, which won't be linear in AMD's case. There is nothing to exceed; it's understood those things will happen.
Cores will scale linearly. Twice the cores at 25% lower clocks will still increase throughput. I said "exceed" because current designs may sit rather high on the power curve.

Yes, you are correct, but you are not going to get double the GPU performance of a single card either, because if you want lower power consumption, you have to drop frequency and thus voltage.
Double, no, but not far off it. 50% higher throughput at less power wouldn't be unreasonable.

As for the whole "MCM isn't viable" argument, every point you mentioned from that paper AMD has already addressed with Vega features: work distribution with binning, data locality with HBCC, and the bandwidth is there. Making it work is relatively simple with those features; maintaining performance may be tricky.

Luckily, HBCC and tiled rasterization, as NUMA optimizations, work really well to reduce the interconnect bandwidth needed.
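For what it's worth, the back-of-envelope arithmetic behind "twice the cores at lower clocks" looks like this under the usual first-order model (throughput scales with cores times clock; dynamic power with cores times frequency times voltage squared). The 25% clock and 10% voltage reductions below are purely illustrative numbers, not anything leaked:

$$\text{throughput} \propto N_{\text{cores}} \cdot f: \qquad 2 \times 0.75 = 1.5\times$$
$$P \propto N_{\text{cores}} \cdot f \cdot V^2: \qquad 2 \times 0.75 \times 0.9^2 \approx 1.2\times$$

So roughly 50% more throughput for about 20% more power, if the lower clock really does allow about 10% lower voltage. Whether a real chip lands anywhere near that depends on where it sits on its voltage/frequency curve, which is exactly what the two of you are arguing about.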
 
You mean it won in this scenario:
a) Doom, where AMD currently has more extensions for low-level access (Nvidia is now implementing its own proprietary extensions to improve Vulkan low-level performance, but they will only come into force with new Vulkan games).
..... The real world is more than just Vulkan and one game using extensions that, at the time, gave AMD a performance edge.
b) Both were on an AMD platform only, where we know AMD GPUs, funnily enough, perform better in DX12/low-level APIs than they do on an Intel platform (going by the DX12-over-DX11 gains in test comparisons).
..... In the real world, most people buying Nvidia GPUs probably use them with an Intel platform, especially as it has come out that, for whatever reason, Nvidia GPUs have some kind of problem with low-level APIs on the AMD platform, especially the 1080 Ti (see DX12 RoTR for an example).

Sorry, but no conclusion can come from that; it is the worst scenario for Nvidia, and I would not put any weight on a conclusion even if it were the other way round, heavily favouring Nvidia.
Cheers

What? How does any of what you say matter to the test that was done? You are trying to imply that the test favoured AMD somehow, when Kyle himself says that this wasn't the case at all. He went out of his way to make sure it wasn't.

The 1080 Ti can push Doom at 100+ FPS at the resolution used. Read the HardOCP review from back in March; it said that the 1080 Ti ran Doom beautifully and could get nearly 80 fps average at 4K and max settings. Since then there have been performance improvements and driver fixes from Nvidia that push that figure higher.

Whereas we know from the FE benchmarks that Vega is only slightly faster than the 1080, and you experts have been saying that RX Vega is only going to be slightly faster than the FE edition. That's still going to be 20% slower than the 1080 Ti.

Which means that in the test at 3440x1440, the minimum frame rate of the 1080 Ti is going to stay above 60 fps, whereas the minimum frame rates of the Vega system are going to dip below 60 fps.

How can it be the worst possible case for Nvidia when their card has the best performance in that game?

How can it be the worst possible case for Nvidia if G-Sync is better than FreeSync? (and I have seen several people make this claim)

How can it be the worst possible case for Nvidia when you are putting a $1300 monitor against a $700 one?

My god, the Nvidia system should have just walked away with this test, yet out of 10 people, 6 could tell no difference, 3 preferred the AMD system and 1 preferred the Nvidia system. And these aren't just regular Joes off the street; these are gamers and people who work in this industry.

Does this say anything about the final performance of RX Vega? No, of course not; it was never meant to. The results were surprising, though, and even Kyle said that.
 
er.. "You are so sure of yourself, yet are missing some simple things..." <---- Your post above proves I was right.

It seems you are not seeing the big picture, or are indeed just trying to dismiss AMD, because Nvidia's mesh (and Intel's, for that matter) is junior stuff and nothing close to as robust as AMD's "Infinity Fabric". You are embarrassing yourself, even more so now that, after complaining and telling everyone in this community that AMD is incapable of doing this or that (without even knowing the truth), you are claiming Nvidia already beat AMD to it. Thanks for the comic relief.


Anyone here who is trying to downplay AMD's Infinity Fabric is doing so because they are not an enthusiast!!


They are just as robust; the functionality is there for them, and they have more. I'm sure AMD will catch up quite easily too, it's just a mesh fabric.....

What you are not understanding is that Infinity Fabric doesn't do shit right now for current architectures. There is nothing special about it. Chimera heterogeneous systems are actually even better than any other mesh out there right now, but they are tailored for specific needs, and that is what Intel and nV did with theirs. Simple. Just because AMD came out with it and hyped it to the moon doesn't mean it's something special that no one else has, man. How much did you hear nV talking about NVLink? They talked about it, but only to the people they needed to talk to, like the HPC and DL people.

For typical or general consumers, all this mesh tech really doesn't do anything extra, at least not right now.
 
Cores will scale linearly. Twice the cores at 25% lower clocks will still increase throughput. I said "exceed" because current designs may sit rather high on the power curve.

How is that exceeding? When you take a chip that has been pushed out of its ideal frequency range and drop it back down into that range, that is bound to happen. Nothing there exceeds expectations; it's called curbing your expectations for understandable reasons.
Double, no, but not far off it. 50% higher throughput at less power wouldn't be unreasonable.

Yes, and there is nothing special about that, it will happen. It's not exceeding anyone's expectations if it does.

As for the whole "MCM isn't viable" argument, every point you mentioned from that paper AMD has already addressed with Vega features: work distribution with binning, data locality with HBCC, and the bandwidth is there. Making it work is relatively simple with those features; maintaining performance may be tricky.

And why is it tricky? Because latency hiding is pretty much impossible with the amount of latency we are talking about! Without the specific problems being solved in their specific areas, nothing will change. It is not viable with current-generation GPUs, period; they just will not function well (performance-wise) without other architectural changes.

Luckily, HBCC and tiled rasterization, as NUMA optimizations, work really well to reduce the interconnect bandwidth needed.

Huh, then it won't be transparent to the developer! When I say that, you guys jump up and down, but now you are saying the same thing months later?
 
How can it be the worst possible case for Nvidia if G-Sync is better than FreeSync? (and I have seen several people make this claim)
G-Sync does not come into play in this test because, as you have said, the 1080 Ti almost certainly sat above the G-Sync range and was V-synced the whole time.
How can it be the worst possible case for Nvidia when their card has the best performance in that game?
V-sync under Kyle's conditions; tearing and maybe even stuttering in a parallel universe.
How can it be the worst possible case for Nvidia when you are putting a $1300 monitor against a $700 one?
I would take that $700 monitor with the 1080 Ti over the $1300 one any day of the week; it is a better gaming panel, as far as my opinion goes.

I think this blind test shows that experience is often defined by more than just numbers, but we knew that, didn't we?
 
[Image: 2uikcvp.jpg]


The image above is of one of the systems that AMD will use at SIGGRAPH to show off Radeon RX Vega.

The motherboard used is MSI Z97 MPOWER MAX AC (Intel Z97).

So, one has to wonder why AMD is using a Haswell/Broadwell system to show off Radeon RX Vega.

That's a render done by an AMD fan; it's not a SIGGRAPH system.
 
OK, now back to mesh technologies.

Let's dumb this down, because seriously, I don't see you guys understanding this stuff at all.



Yeah, mesh-interconnect-wise Intel has the same functionality as AMD, but Intel's is faster (less latency) while AMD's is cheaper. Intel has even more features too. AMD's is more flexible when it comes to PCIe lanes.

Starting from 6:00 onward.

So does it really matter what mesh technology is underneath when the core problems of mGPU aren't solved? The damn fabric means shit.
 
OK, now back to mesh technologies.

Let's dumb this down, because seriously, I don't see you guys understanding this stuff at all.



Yeah, mesh-interconnect-wise Intel has the same functionality as AMD, but Intel's is faster (less latency) while AMD's is cheaper. Intel has even more features too.

Starting from 6:00 onward.

So does it really matter what mesh technology is underneath when the core problems of mGPU aren't solved? The damn fabric means shit.


So Intel does do something with their piles of money!

Do you think mGPU would run better on Intel? I know it has less latency but it's hard for me to tell what that means to us end users.

I guess the answer is yes because of higher performance in general...
 
So Intel does do something with their piles of money!

Do you think mGPU would run better on Intel? I know it has less latency but it's hard for me to tell what that means to us end users.

I guess the answer is yes because of higher performance in general...


Hmm, mGPU only uses the CPU to feed the GPUs, so no, it shouldn't. Drivers can take care of any latency issues that arise on the CPU side of things; well, work needs to be done on the driver side to mask it.
 
As a former CPU designer, I would like to point out that the role Infinity Fabric itself plays in multi-unit performance will not be a differentiator. That's the (relatively) easy part; all the companies in question are quite capable of making something that fills the same role.

I'm not knocking it; it's a fine interconnect design. But, meaning no disrespect to the designers, it's a relatively straightforward bit of tech that got a very nice marketing name.
 
G-Sync does not come into play in this test because, as you have said, the 1080 Ti almost certainly sat above the G-Sync range and was V-synced the whole time.

Are you living in the past? G-Sync on with V-sync on means no tearing or input lag when frame rates go above the monitor's refresh rate; Nvidia solved those problems with the release of G-Sync 2. And I didn't say the 1080 Ti would sit above the G-Sync range, I said it would sit above 60 Hz. I said it could push 100+ FPS, but I was talking about max fps.

V-sync under Kyle's conditions; tearing and maybe even stuttering in a parallel universe.

See above. And these weren't Kyle's conditions, they were Nvidia's; that's how they wanted it done. It's funny how not one of the 10 mentioned tearing/stuttering or anything like that at all. In fact, 6 of them couldn't see a difference between the two.

I would take that $700 monitor with the 1080 Ti over the $1300 one any day of the week; it is a better gaming panel, as far as my opinion goes.

Really? You would take a FreeSync monitor to go with your 1080 Ti? Haha, yes, sure you would. Both monitors get great reviews, both have roughly the same input lag and pixel response time, and both come highly recommended for gaming, especially at 100 Hz.

I think this blind test shows that experience is often defined by more than just numbers, but we knew that, didn't we?

And yet so many people have tried to rubbish the idea, even going so far as to say that AMD had the performance advantage and the test was the worst possible scenario for Nvidia.

I didn't see you rush out to correct him though.
 
I guess Vega FE sort of ballparked the performance. Is anyone else surprised at the lack of leaked benchmarks? Or did they just pass me by?
 