Boil
[H]ard|Gawd
- Joined
- Sep 19, 2015
- Messages
- 1,439
...Cat 6 from CAT 5e...
Cat 7 or Bust...!
Do you realize nV with CUDA has more features than HSA as a whole, by everyone involved? LOL, and you say they are behind? Do you realize nV's mesh technology was released before Infinity Fabric and has more forward-looking features implemented and in use right now? Something AMD is still in the process of implementing for future generations of GPUs? AMD has no presence in the HPC (HSA) market, and you want to sit here and post about things that aren't even widely used in the WORLD?
If you want to look things up before you spout off about who is talking shit, who is being obnoxious, or who thinks he knows what he is talking about because he read a few marketing presentations from AMD, that would be a better way to go! You can look at the many talks here about HSA vs CUDA and AMD vs nV implementations. It has been discussed many times over.
RAJA NEVER STATED MGPU WILL BE REPLACED ANY TIME SOON!
That is BS. What HE STATED WAS: mGPU is the near future; later on it will be replaced with technology that is transparent, which is the ultimate goal, but in the meantime DEVELOPERS WILL NEED TO GET THEIR HANDS DIRTY. THERE WAS NO TIMELINE for that implementation of transparency. Please don't sit here and make things up about what Raja stated, because I can link everything he said.
Am I badmouthing the situation when I'm pulling up relevant facts, which you can't even discuss because maybe you just don't understand them? Hmm? You were the one who stated you didn't want to mention specifics, so I did, and now you are just talking BS. Counter what I stated, please, instead of this rhetoric and the BS AMD marketing campaigns we were never talking about. It's not badmouthing to say it's not going to happen with current tech because of the limitations current tech has; it's just reality. Also, this solution alone won't make AMD competitive: if you glue two bad chips together, you still get one bad solution. AMD needs to rework their core GPU architecture to get anywhere, not stick two ICBMs in one package and call it a day.
I'm so damn sure about it because I'm a programmer and have been writing multi-GPU applications for years, working on them in many different capacities across the entire graphics and programming pipeline. I have also done HPC and AI work in the past, so I am familiar with the capabilities of software vs hardware, what can be done, and what can't be done currently. This is exactly why I knew about the NUMA issues Ryzen and Epyc would have the second the CCX latency issue popped up (actually even before that; the results were too obvious to ignore). It was something I had seen before, you know with what architecture? Yeah, Pentium D. Granted, Pentium D had other issues, but a shared front-side bus just doesn't work well where latency is concerned; there's just not enough bandwidth to supply both chips properly.
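The shared-bus point can be put in rough numbers. Here is a toy Python sketch (all figures hypothetical, not measurements of any real chip) of why two dies on one shared link see both halved bandwidth and inflated latency under load:

```python
# Toy model of two dies sharing one front-side bus.
# Numbers are made up for illustration; this is not a simulation
# of Pentium D or any real chip.

def effective_bandwidth(link_gbps, n_chips):
    """Each die's share of a single shared link."""
    return link_gbps / n_chips

def effective_latency(base_ns, utilization):
    """Crude M/M/1-style queueing inflation: latency blows up as the
    shared link approaches saturation."""
    assert 0 <= utilization < 1
    return base_ns / (1 - utilization)

print(effective_bandwidth(8.5, 1))   # one die owns the link: 8.5 GB/s
print(effective_bandwidth(8.5, 2))   # two dies share it: 4.25 GB/s each
print(effective_latency(100, 0.40))  # lightly loaded link
print(effective_latency(100, 0.80))  # both dies hammering it: 5x the base latency
```

The second pair of prints is the point: the bandwidth split is linear, but the latency penalty under contention is not, which is why shared-link designs fall over once both dies are busy.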
Not saying I disagree that the panels were probably more important, and AMD likely chose certain ones for a reason... but it's not like Vega can do G-Sync or nVidia can do FreeSync. So it does include the card.
I would have liked to see how it would have gone with motion blur off. It sounded like nVidia used more aggressive motion blur, IMO.
People chose it over the nVidia setup. How is that not winning?
Was it biased? Probably. Do I put much weight into it? Nah. But Vega still won.
So double 8-pins: either they're there for stability, or it's definitely requiring more power than a 1080 (the 1080 has 8+6).
Doesn't look like there is an AIO in there, or if there was, it got dismantled.
I would say most systems are Intel, so having an Intel system may be a good call. Actually, having a variety of systems would probably be better; RTG stuff needs to support Intel systems just as well as AMD systems.
The image above is of one of the systems that AMD will use at SIGGRAPH to show off Radeon RX Vega.
The motherboard used is MSI Z97 MPOWER MAX AC (Intel Z97).
So, one has to wonder why AMD is using a Haswell/Broadwell system to show off Radeon RX Vega.
Radeon Chill may be put to good use with Vega. Nice-looking cards, even if it's not a complete view. Those connectors are probably there if you want to overclock. I think base power is a 275 W TDP on gaming Vega, from the leaks.
"Luck of the draw, works best? I don't know; I don't think it matters at all. There will be plenty of reviews on all sorts of systems shortly. Now I would like to see some comparison between Intel and Ryzen systems."
Right, but why Haswell?
"If I was offering I would ask him to cite every time I said X was impossible. Since I have only 1,500 posts, it should not take long to Ctrl-F through it."
If you are offering, then you owe him about $1,500.
I am glad you have scaled down your argument to something less obnoxious.
I agree with you for the most part and understand the complexities involved. I just do not think you are grasping how far out ahead AMD is in this department. It has nothing to do with Nvidia; AMD is at the forefront of Heterogeneous Computing (HSA), and they have an answer for many of the things you claim they don't. Also, let's not forget that AMD has been pursuing associated technologies that fit nicely into their HSA ecosystem. AMD is simply headed in a different direction than Nvidia.
You are so sure of yourself, yet you are missing some simple things and unable to admit that Raja said it's coming. Even so, I don't see you speculating on HOW, or on WHAT Infinity Fabric could mean for gamers, or allowing yourself to speak about the good it could bring. All you want to do is badmouth the situation and claim AMD is less competent than you. All the while, I personally watched a developer on stage at C&C claim near-100% scalability using Vega. Didn't that ring a bell for anyone about where AMD might be going with their RX gaming line in the immediate future? I foresee AMD making "TitanRipper" happen before Xmas, then leveraging Vega x2 on the push to 7nm Navi with further advancements in Infinity Fabric (Infinity Fabric 2.0), in a tick-tock cadence. I bet we might even see a Vega x4 at some point nearing 2019.
I'll trust what Dr. Su & Raja have said over you. Sorry!
Rogers is no longer listed as president of the Heterogeneous System Architecture Foundation, an anti-Nvidia-Intel consortium that includes the usual non-Chipzilla gang: ARM, Imagination, Mediatek, Qualcomm, Samsung, and so on. The group's goal is to create an architecture that melds graphics accelerators and application processors together into the same virtual memory address spaces so software can easily use GPUs to speed up tasks. It should also drive down power usage by simplifying the design of systems with fewer buses and controllers.
In March, the foundation published its specification for making HSA-compatible processors and chipsets. AMD's Carrizo family of system-on-chips, revealed in February, is HSA-compliant.
NUMA can be tweaked, and no one has had a reason to optimize for AMD until now; to be honest, Epyc has done really well in benchmarks, but no design is flawless. Also, latency has a lot to do with the distance the signal has to travel and the medium it travels in, one of the reasons we started using fiber optics in cars as we added more modules all over the vehicle.

Monolithic design has been favored for a reason, and that reason is bandwidth, but the reality is we're getting much better at sending data at blazing speeds, and the wall at the end of node shrinks is about to be hit. The mother of invention is necessity, and AMD, Intel, and Nvidia are about to have a great need. I expect the monolithic design is on its last legs, just like it was in the auto industry. At one time the only way to increase power was to add more cylinders and increase piston size; then we started to realize there are diminishing returns from going bigger and bigger. As pistons get bigger, so do the rods and crank, to deal with all the forces and weight; then you need to feed that monster, so your valvetrain gets heavy and hard to fit, and your efficiency goes into the toilet. So to increase power we limited piston size, focused on increasing head flow, and limited the length of the crankshaft. Nowadays we have four-cylinders that out-produce those old V16s, which is why bigger is not always better.

I expect CPU and GPU design to go toward purpose-built blocks for specific workloads (MMX, AVX, and so on), with each of those processors optimized to run that code flawlessly. That would require the processor to stitch it all together, and I think that is where everyone is heading.
What is the successor of Polaris?
Does anyone know?
"Cores will be linear. Twice the cores and 25% lower clocks will increase throughput. I said exceed because current designs may be rather high on the power curve."
How would scaling exceed? Please give me examples, because if you drop frequency, let's say on a dual-GPU part, your scaling still goes down with that drop, plus the advantages of the lower voltage it needs. Performance scaling is still based on frequency, which won't be linear in AMD's case. But there is nothing there to exceed anything; it's understood those things will happen.
"Double no, but not far off of it. 50% higher throughput at less power wouldn't be unreasonable."
Yes, you are correct, but you are not going to get double the performance of a single card either, because if you want less power consumption, you have to drop frequencies and thus voltages.
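For what it's worth, the "wider but slower" arithmetic both sides are circling can be written down. A back-of-the-envelope Python sketch, assuming the usual dynamic-power approximation P ≈ N·f·V² and idealized perfect scaling across units (real chips do worse):

```python
# Idealized wide-vs-fast arithmetic. Assumes perfect scaling across
# units and dynamic power only; the voltage figure is hypothetical.

def throughput(units, clock):
    return units * clock                   # work/sec, arbitrary units

def dynamic_power(units, clock, voltage):
    return units * clock * voltage ** 2    # P ~ N * f * V^2

base_t = throughput(1.0, 1.0)
base_p = dynamic_power(1.0, 1.0, 1.0)

# Double the units, drop clocks 25%, and assume the lower clock allows
# roughly a 20% voltage drop (made-up number, high on the V/f curve):
wide_t = throughput(2.0, 0.75)
wide_p = dynamic_power(2.0, 0.75, 0.80)

print(wide_t / base_t)  # 1.5x throughput
print(wide_p / base_p)  # ~0.96x power, slightly below the baseline
```

Under these toy assumptions you get the quoted "50% higher throughput at less power", but never 2x the performance of one chip at the same power, which is the frequency/voltage trade-off being argued above.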
You mean it won in the scenario:
a) Doom, where AMD currently has more extensions for low-level access (Nvidia is now implementing their own proprietary extensions to improve Vulkan low-level performance, but those will only come into force with new Vulkan games).
..... The real world is more than just Vulkan and one game using extensions that, at the time, gave AMD a performance edge.
b) Both were on an AMD platform only, where, funnily enough, we know AMD GPUs perform better in DX12/low-level APIs than they do on an Intel platform (compare the DX12-over-DX11 gains across test platforms).
..... In the real world, most people buying Nvidia GPUs probably use them with an Intel platform, especially as it came out that, for whatever reason, Nvidia GPUs are having some kind of problem with low-level APIs on the AMD platform, especially the 1080 Ti (see DX12 RoTR for an example).
Sorry, but no conclusion can come from that; it is the worst scenario for Nvidia, and I would not put any weight on a conclusion even if it were the other way round, heavily favouring Nvidia.
Cheers
er.. "You are so sure of yourself, yet are missing some simple things..." <---- Your post above proves I was right.
It seems you are not seeing the big picture, or are indeed just trying to dismiss AMD, because Nvidia's mesh (and Intel's, for that matter) is junior stuff, nothing as close or as robust as AMD's "Infinity Fabric". You are embarrassing yourself, and even more so now that, after complaining and telling everyone in this community that AMD is incapable of doing this or that (without even knowing the truth), you are claiming Nvidia already beat AMD to it. Thanks for the comic relief.
Anyone here who is trying to downplay AMD's Infinity Fabric is doing so because they are not an enthusiast!!
Cores will be linear. Twice the cores and 25% lower clocks will increase throughput. I said exceed because current designs may be rather high on the power curve.
Double no, but not far off of it. 50% higher throughput at less power wouldn't be unreasonable.
As for the whole "MCM is not viable" argument: every point you mentioned from that paper, AMD has already addressed with Vega features. Work distribution with binning, data locality with HBCC, and the bandwidth is there. Making it work is relatively simple with those features; maintaining performance may be tricky.
Luckily, HBCC and tiled rasterization as NUMA optimizations work really well to reduce that interconnect bandwidth need.
"G-Sync does not work in this test, because, as you have said, the 1080 Ti almost certainly sat above the G-Sync range and was V-synced all the time."
How can it be the worst possible case for Nvidia if G-Sync is better than FreeSync? (And I have seen several people make this claim.)
"V-Sync in Kyle's conditions; tearing and maybe even stuttering in a parallel universe."
How can it be the worst possible case for Nvidia when their card has the best performance in that game?
"I would take that $700 monitor with the 1080 Ti over the $1300 one any day of the week; it is a better gaming panel, as far as my opinion goes."
How can it be the worst possible case for Nvidia when you are putting a $1300 monitor against a $700 one?
"The image above is of one of the systems that AMD will use at SIGGRAPH to show off Radeon RX Vega. The motherboard used is an MSI Z97 MPOWER MAX AC (Intel Z97). So, one has to wonder why AMD is using a Haswell/Broadwell system to show off Radeon RX Vega."
That's a render done by an AMD fan, it's not a SIGGRAPH system.
Infinity Fabric isn't new... It's basically HyperTransport on steroids.
OK, now back to mesh technologies.
Let's dumb this down, because seriously, I don't see you guys understanding this stuff at all.
Yeah, mesh-interconnect-wise Intel has the same functionality as AMD, but Intel's is faster (less latency) while AMD's is cheaper. Intel has even more features, too.
Starting from 6:00 onward.
So does it really matter what mesh technology is underneath when the core problems of mGPU aren't solved? The damn fabric means shit.
Exactly; it's just a way of routing data. It doesn't control what is routed, when, or how.
"Infinity Fabric isn't new... It's basically HyperTransport on steroids."
But it's got a new name, and everyone likes lemniscates!
So Intel does do something with their piles of money!
Do you think mGPU would run better on Intel? I know it has less latency, but it's hard for me to tell what that means for us end users.
I guess the answer is yes because of higher performance in general...
Most likely Navi.
G-Sync does not work in this test, because, as you have said, the 1080 Ti almost certainly sat above the G-Sync range and was V-synced all the time.
V-Sync in Kyle's conditions; tearing and maybe even stuttering in a parallel universe.
I would take that $700 monitor with the 1080 Ti over the $1300 one any day of the week; it is a better gaming panel, as far as my opinion goes.
I think this blind test shows that experience is often defined by more than just numbers, but we knew that, didn't we?
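The above-the-VRR-window behavior being described reduces to a simple decision rule. A Python sketch with made-up range numbers, not the actual specs of either monitor in the test:

```python
# How adaptive sync degrades outside the monitor's refresh window.
# Illustrative only; the 30-100 Hz window below is hypothetical.

def sync_behavior(fps, vrr_min, vrr_max, vsync_on):
    if vrr_min <= fps <= vrr_max:
        return "variable refresh"          # G-Sync/FreeSync active
    if fps > vrr_max:
        return "v-sync cap" if vsync_on else "tearing"
    return "LFC or tearing"                # below the window

print(sync_behavior(90, 30, 100, True))    # inside the window
print(sync_behavior(140, 30, 100, True))   # above it with V-sync, as described for the 1080 Ti
print(sync_behavior(140, 30, 100, False))  # above it with V-sync off
```

The point of the argument above is the middle case: once the card renders past the window, the adaptive-sync hardware stops mattering and the experience is plain V-sync (or tearing).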
I guess Vega FE sort of ballparked the performance. Anyone else surprised at the lack of leaked benchmarks? Or did they just slip past me?