Vega Rumors

Because they are priced approximately the same? Makes sense to compare similar market segments.

They said Vega 64 matches a 1080.

It will also reduce the load on ROPs, TMUs, and cache as it will accelerate culling in some cases. Still need more documentation on what steps are being performed.
The whole point is to keep data in the cache; I don't see how it will reduce the load. If rasterization was a bottleneck, then lifting it will increase ROP load, if anything.

As I've been somewhat consistently posting in AMD threads, yes, it would be accurate that I'm staying on topic and talking about AMD. My interest lies in upcoming technology and trends. As AMD is traditionally well ahead with regard to features and capabilities, that's what I focus on.
But not hardware accelerated primitive discard, tiled rasterization, multiprojection. You went off on an ignorant tirade against 'tensor cores' on B3D in such haste that you forgot to even read what operation they were performing. Kept droning on about tensor products for the love of god.

Just try to explain this thread, with all the "AMD consumes all this power", "Nvidia has better performance/watt", "Nvidia supports async", "DX12 benchmarks on Nvidia aren't valid", "can't compare games using intrinsics because they were left out of the API since Nvidia doesn't support them".
What exactly is it you want me to explain about this thread? I don't follow. NV has better perf/W, supports async. Yes. And? Why are benchmarks in DX12 not valid? I don't understand, and how can NV not support intrinsics? They are intrinsic to the architecture; please inform yourself about what shader intrinsics are before making statements like these, because I cannot take you seriously at this rate lol.

Nvidia doesn't support intrinsics, Intel doesn't support their own ISA. lol.

Not sure what expectations I'm setting. Half the fun with AMD is the over the top hype that gets generated. I just crunch the numbers and arrive at a sensible conclusion. The only time that isn't the case involves someone taking the statements completely out of context, often in a manner that makes no sense at all.

I'm sorry to shatter your alt-reality, but you never arrive at sensible conclusions; you are no different from the people over-hyping shit on r/AMD, you're just better spoken and give the impression that you are better informed.

No need for facts? Unbiased, educated speculation? Just about any other forum would have a much higher signal-to-noise ratio than this one. Ignore half a dozen posters and 80% of the thread disappears and nothing of value is missed. Just endless repetitive posts hammering talking points about fact x that isn't true and conclusion y that has no basis in science because of unrelated context z. It's nearly all misinformation, but the reviews are good and there are some useful discussions at times.

Your speculation is neither unbiased nor educated, so you fit right in with your description of the forum, mate. This is your home now.
 
No need for facts? Unbiased, educated speculation? Just about any other forum would have a much higher signal-to-noise ratio than this one. Ignore half a dozen posters and 80% of the thread disappears and nothing of value is missed. Just endless repetitive posts hammering talking points about fact x that isn't true and conclusion y that has no basis in science because of unrelated context z. It's nearly all misinformation, but the reviews are good and there are some useful discussions at times.

It's a rumors thread, so all that's fine, though you may not like it. Are you sure it's not 81.3%, while we are speculating with no basis in science? You pull numbers out of thin air when it suits you.

The topic is Vega, not this forum or its members. You should refrain from attacks of that nature on H and its members or find a place you like to be other than here, imho, if it's not good enough for you. I happen to like this place. I don't have a problem with you, but if you want to group me and other people into your statement and expect no backlash, here's a reality check.

Back on topic: I'm starting to wonder about the memory on Vega; I was poking around this issue in another thread. Interesting numbers you have there on FE.
 
Because they are priced approximately the same? Makes sense to compare similar market segments.

AMD is directly comparing Vega 64 to the 1080 in their slides and in present and past presentations. This is not by accident; it is by design. That is reality.

It will also reduce the load on ROPs, TMUs, and cache as it will accelerate culling in some cases. Still need more documentation on what steps are being performed.

No, it doesn't reduce load on specific components, it reduces bandwidth; two different things, man. The cache with DSBR will improve pixel culling, but that varies from app to app, and without any real information from AMD, other than a variable chart based on apps, how did you even consider putting numbers to that? Generalizations based on what nV is doing? Even if you could do that, which you can't, we don't have numbers for it, so.....

Not sure what expectations I'm setting. Half the fun with AMD is the over the top hype that gets generated. I just crunch the numbers and arrive at a sensible conclusion. The only time that isn't the case involves someone taking the statements completely out of context, often in a manner that makes no sense at all.

Yet there's no competition for the 1080 Ti, but you have imagined it can take on Volta with driver updates? How did your number crunching account for that?
 
Back on topic I'm starting to wonder about the memory on Vega, I was poking around this issue in another thread. Interesting numbers you have there on FE.

[chart: Vega FE effective memory bandwidth test results]


I mean, I don't think it's reasonable to conclude that RTG could have intended Vega to have 20% less raw memory bandwidth than Fiji, right? Even if they were counting on significant bandwidth savings from Primitive Shaders and DSBR, it still wouldn't really make much sense to design a GPU to clock higher than Fiji and give it less memory bandwidth by design, right?

As far as my limited understanding of the subject goes, the testing so far on Vega FE appears to indicate that it is memory bandwidth bound in gaming. In which case, it would seem that the burning question is to what extent having 20% less memory bandwidth than Fiji is a hardware issue that cannot be fixed, or some sort of severe driver issue that could be fixed to some extent. If this is something that could be addressed in drivers, I don't think it's unreasonable to suggest that a significant increase in memory bandwidth would improve both gaming and ETH mining performance.
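For what it's worth, a back-of-the-envelope check against the published specs (assuming Vega FE's 1.89 Gbps/pin HBM2 and Fiji's 1.0 Gbps/pin HBM1, which I believe are the quoted figures):

$$BW_{\text{peak}} = \frac{\text{bus width (bits)} \times \text{data rate (Gbps/pin)}}{8}$$

Fiji: $4096 \times 1.00 / 8 = 512$ GB/s; Vega FE: $2048 \times 1.89 / 8 \approx 484$ GB/s. So on paper the deficit is only about 5%; the ~20% figure comes from measured effective bandwidth, which is exactly why a driver or memory-controller explanation doesn't seem crazy.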
 
I mean, I don't think it's reasonable to conclude that RTG could have intended Vega to have 20% less raw memory bandwidth than Fiji, right?

Agree

Even if they were counting on significant bandwidth savings from Primitive Shaders and DSBR, it still wouldn't really make much sense to design a GPU to clock higher than Fiji and give it less memory bandwidth by design, right?

Someone correct me if I'm wrong, but I don't think previous designs were memory (bandwidth or latency) bottlenecked, so it could make sense; but I think the expectation was faster memory, and I'm wondering how that expectation might have changed things, or whether it was met.

As far as my limited understanding of the subject goes, the testing so far on Vega FE appears to indicate that it is memory bandwidth bound in gaming. In which case, it would seem that the burning question is to what extent having 20% less memory bandwidth than Fiji is a hardware issue that cannot be fixed, or some sort of severe driver issue that could be fixed to some extent. If this is something that could be addressed in drivers, I don't think it's unreasonable to suggest that a significant increase in memory bandwidth would improve both gaming and ETH mining performance.

Well, Raja said:

Both Fiji's and Vega's HBM(2) implementations offer plenty of bandwidth for all workloads.
Consumer RX will be much better optimized for all the top gaming titles and flavors of RX Vega will actually be faster than Frontier version!


So that pretty much denies that, and sort of confirms that the increase in performance has to come from the driver side?

Don Woligroski said: Vega performance compared to the Geforce GTX 1080 Ti and the Titan Xp looks really nice.

I guess it depends on what you think "really nice" is if we compare the slides to what he's saying. Faster than FE, but by how much? "Really nice for the price" would be my guess, and that's between the 1080 and the Ti, trading spots in various games.

The only reason I could think that RTG would hold back is so that they don't give Nvidia a performance target before launch. You saw how Nvidia reacted to FE. I've got to say it's rich ... and dubious, if memory serves, that RTG would have to rely on its drivers to put their product over the mountain.
 
[chart: Vega FE effective memory bandwidth test results]


I mean, I don't think it's reasonable to conclude that RTG could have intended Vega to have 20% less raw memory bandwidth than Fiji, right? Even if they were counting on significant bandwidth savings from Primitive Shaders and DSBR, it still wouldn't really make much sense to design a GPU to clock higher than Fiji and give it less memory bandwidth by design, right?

As far as my limited understanding of the subject goes, the testing so far on Vega FE appears to indicate that it is memory bandwidth bound in gaming. In which case, it would seem that the burning question is to what extent having 20% less memory bandwidth than Fiji is a hardware issue that cannot be fixed, or some sort of severe driver issue that could be fixed to some extent. If this is something that could be addressed in drivers, I don't think it's unreasonable to suggest that a significant increase in memory bandwidth would improve both gaming and ETH mining performance.


Well, we don't know what is going on; we'll have to see once the card gets fully reviewed, or I can get my hands on one and do some testing :)
 
The whole point is to keep data in the cache; I don't see how it will reduce the load. If rasterization was a bottleneck, then lifting it will increase ROP load, if anything.
Keep data cached and rearrange (bin) triangles to facilitate better culling. While backface culling is straightforward, discarding everything behind the wall you are staring at can be equally effective. GN has an interview with Mike Mantor about primitive discard. The binning would be an extension of that, with some areas of improvement mentioned.
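For the curious, the backface half of that is cheap enough to see why the hardware can discard such triangles early; a minimal sketch in C++ with hypothetical helper names, illustrative only and obviously not AMD's implementation:

```cuda
struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y,
                                      a.z*b.x - a.x*b.z,
                                      a.x*b.y - a.y*b.x}; }
float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if triangle (v0,v1,v2), wound counter-clockwise, faces away from
// the eye; such triangles can be dropped before rasterization, sparing
// the ROPs/TMUs any work on them.
bool backfacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye) {
    Vec3 n = cross(sub(v1, v0), sub(v2, v0)); // face normal from winding
    return dot(n, sub(v0, eye)) >= 0.0f;      // normal points away from eye
}
```

The wall case (occlusion) is harder, since it needs depth information, which is where the binning comes in: sort triangles into screen tiles first, and visibility within a tile can be resolved before any shading happens.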

But not hardware accelerated primitive discard, tiled rasterization, multiprojection. You went off on an ignorant tirade against 'tensor cores' on B3D in such haste that you forgot to even read what operation they were performing. Kept droning on about tensor products for the love of god.
What ignorant tirade? There was no tirade I recall, but you were posting a lot about a point that was ultimately incorrect. I was just pointing out what function was being performed. As for ignorance, I was correct so not sure how that applies. All I did was point out that tensor cores are just regular SIMDs with wave level instructions.
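For reference, the published operation is a warp-wide matrix multiply-accumulate (FP16 inputs, FP32 accumulate), not a general tensor product; CUDA 9 exposes it through the warp-level wmma API. A minimal sketch of a single 16x16x16 tile (requires sm_70; illustrative only):

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp cooperatively computes C = A*B + C on a 16x16x16 tile:
// FP16 multiplies with FP32 accumulation, i.e. the tensor-core op.
__global__ void tile_mma(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);      // start from C = 0
    wmma::load_matrix_sync(a_frag, a, 16);  // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
```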

What exactly is it you want me to explain about this thread? I don't follow. NV has better perf/W, supports async. Yes. And? Why are benchmarks in DX12 not valid? I don't understand, and how can NV not support intrinsics? They are intrinsic to the architecture; please inform yourself about what shader intrinsics are before making statements like these, because I cannot take you seriously at this rate lol.

Nvidia doesn't support intrinsics, Intel doesn't support their own ISA. lol.
I'm not sure why they are invalid. Just that everyone says that because Nvidia normally loses performance. Especially with async enabled, but at least it's supported I guess.

As for the intrinsics in question, they are functions widely used on console and shortly on PC, with SM6.x coming soon. They should have been added years ago, but these current instructions render comparisons on Doom invalid, so I hear.
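For anyone wondering what these look like: they are cross-lane (wave/warp) operations. A minimal CUDA 9 sketch of a warp-wide sum, the same family of operation as SM6.0's WaveActiveSum or the GCN cross-lane ops in question (a toy version; launch with a single 32-thread warp):

```cuda
// Warp-level reduction via shuffle intrinsics: each step pulls a value
// from the lane `offset` positions higher, halving the span each time.
__global__ void warp_sum(const float *in, float *out) {
    float v = in[threadIdx.x];
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);
    if (threadIdx.x == 0) *out = v;  // lane 0 ends up with the full sum
}
```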

Your speculation is neither unbiased nor educated, so you fit right in with your description of the forum, mate. This is your home now.
Where have I shown bias? I compare and contrast fairly evenly, even if you don't appreciate the results of those comparisons.

As for educated, my conclusions are based on engineering experience and time studying architectures with a rather good track record for speculating.

It's a rumors thread, so all that's fine, though you may not like it. Are you sure it's not 81.3%, while we are speculating with no basis in science? You pull numbers out of thin air when it suits you.
Rough estimate based on scroll bar size and less than half a dozen posts per page.

The topic is Vega, not this forum or its members. You should refrain from attacks of that nature on H and its members or find a place you like to be other than here, imho, if it's not good enough for you.
I have attacked nobody, as you appear to be doing here. I've only pointed out a significant number of posters seem to mirror Nvidia's talking points across forums, post pro Nvidia/anti-AMD comments, and drown out discussions with posts of very little substance.
That's not an attack but a simple observation. Ignore works quite well for the signal to noise as I've mentioned before.
 
Keep data cached and rearrange (bin) triangles to facilitate better culling. While backface culling is straightforward, discarding everything behind the wall you are staring at can be equally effective. GN has an interview with Mike Mantor about primitive discard. The binning would be an extension of that, with some areas of improvement mentioned.

And it is enabled in the performance numbers presented by AMD a few days ago.
What ignorant tirade? There was no tirade I recall, but you were posting a lot about a point that was ultimately incorrect. I was just pointing out what function was being performed. As for ignorance, I was correct so not sure how that applies. All I did was point out that tensor cores are just regular SIMDs with wave level instructions.


No, what you did was go off on a long tirade about how tensor cores are a cursory addition of little relevance that can be easily implemented on current GCN hardware. Moreover, you claimed that they perform tensor products, which indicates to me that you hadn't even bothered to read what little information NV had published about them. My favorite kind of poster.
I'm not sure why they are invalid. Just that everyone says that because Nvidia normally loses performance. Especially with async enabled, but at least it's supported I guess.

The only true statement in this paragraph is that you guess.

Nobody cares; none of us ever argued that Maxwell would gain performance from async, only that it is perfectly capable of it, which holds true and was proven with the release of Pascal. The monumental gains you claimed Fiji would have, bringing it on par with Titan X, never happened though.

This is all tangential to the subject though, the subject being your wildly enthusiastic predictions.
As for the intrinsics in question, they are functions widely used on console and shortly on PC, with SM6.x coming soon. They should have been added years ago, but these current instructions render comparisons on Doom invalid, so I hear.

GCN intrinsics are widely used on consoles, because consoles use GCN GPUs. Your point?
Where have I shown bias? I compare and contrast fairly evenly, even if you don't appreciate the results of those comparisons.

This is like when someone comes home to their dog having ravaged the furniture and the dog just acts like nothing happened. You have shown bias every chance you got; the RX 480 never encroached on 1070 territory, Fiji never performed as you claimed it would...

Whenever anyone contests your wild claims you smugly accuse them of being biased or ignorant, but you cannot tolerate someone doing that to you.
As for educated, my conclusions are based on engineering experience and time studying architectures with a rather good track record for speculating.

My conclusions are based on having had the misfortune of reading a large number of your posts across H and B3D. My credentials are solid and verifiable.
 
I have attacked nobody, as you appear to be doing here. I've only pointed out a significant number of posters seem to mirror Nvidia's talking points across forums, post pro Nvidia/anti-AMD comments, and drown out discussions with posts of very little substance.
That's not an attack but a simple observation. Ignore works quite well for the signal to noise as I've mentioned before.

Go back and read what you typed; it's different from what you're saying now. You're not under attack, you just got a response to what you said, and now you're trying to change it up. But if you didn't mean it that way, cool. If you did, or you have a problem, feel free to ignore me or everyone else, or just go somewhere else. I'm not the one complaining about the community or the thread I'm posting in. That's you, and only you can change it.

I'd like to see AMD/RTG compete. At this point though they seem solid enough with deals from consoles and Macs. I was entertaining the idea of buying a Vega card but I'm not convinced I'd rather have a 64 than a Ti.
 
Primitive Discarding in Vega: Mike Mantor Interview. Vega 10 can reach 1.7GHz and what I got out of the interview was that it will do so quite frequently. Skip to 8:30 to see that part of the conversation.

OK, AMD just clearly stated primitive discard is at Polaris levels; to get any more, primitive shaders MUST be used. So my initial assumption about AMD triangle throughput performance from over a year ago is now 100% validated by AMD. Based on this alone, there is absolutely no way AMD can match up against Pascal without extra work being done by developers.

Anarchist, I think you might want to take your numbers and throw them out, if you are still going along the lines of improved throughput on Vega.

PS: this has been the biggest problem of GCN and why its shader array isn't fully utilized. This explains a lot about why Vega isn't scaling well.
 
Potential performance comparison. Korean, so a translator is needed.
http://drmola.com/bbs_free/221888

Translator fu...

The variables of the main control group are as follows.

- Radeon RX Vega 56: 3584 cores, base clock 1156MHz, boost clock 1471MHz, memory clock 1600MHz

- Radeon RX Vega 64 air cooling: 4096 cores, base clock 1247MHz, boost clock 1546MHz, memory clock 1890MHz

- Radeon RX Vega 64 water cooling: 4096 cores, base clock 1406MHz, boost clock 1677MHz, memory clock 1890MHz


At this point, we cannot know exactly how the Radeon RX Vega series throttles, so the values calculated with the base clock and the boost clock are displayed as the lower and upper limits, respectively. On the base-clock basis, the RX Vega 64 sits between the GTX 1070 and 1080 in both air-cooled and water-cooled form, while on the boost-clock basis the water-cooled RX Vega 64 slightly overtakes the GTX 1080 at all resolutions.

On the other hand, the RX Vega 56 is expected to outperform the GTX 1070 by about 1-4%p at resolutions other than UHD on the base-clock basis, and to be 8-12%p ahead of the GTX 1070 under all other conditions. On the boost-clock basis in particular, the gap to the GTX 1080 holds at around 8%p regardless of resolution, putting it a bit closer to the 1080 than to the GTX 1070. But it's hard to expect the reference design to sustain the boost clock, and that's what Birefur is all about.
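If the methodology is simple linear clock scaling, which the lower/upper-limit framing suggests (the article does not spell it out, so this is an assumption), the estimate would be

$$P_{\text{est}} = P_{\text{ref}} \times \frac{f}{f_{\text{ref}}}$$

which would also explain the wide spread between each card's limits: the air-cooled Vega 64's boost-to-base ratio alone is $1546/1247 \approx 1.24$.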
 
OK, AMD just clearly stated primitive discard is at Polaris levels
Wasn't this type of primitive discard not used on Fiji at all though? Shouldn't that mean that Vega should have greater effective texture bandwidth than Fiji, since Fiji didn't do this at all, rather than having way less effective texture bandwidth as the initial picture I posted of Vega FE showed? Why is it that every detail that trickles out about Vega seems to make the picture more confusing rather than less?
 
Wasn't this type of primitive discard not used on Fiji at all though? Shouldn't that mean that Vega should have greater effective texture bandwidth than Fiji, since Fiji didn't do this at all, rather than having way less effective texture bandwidth as the initial picture I posted of Vega FE showed? Why is it that every detail that trickles out about Vega seems to make the picture more confusing rather than less?

No, it wasn't; it was in Polaris though.

Texture bandwidth shouldn't be affected in this B3D test, as it shouldn't be stressing the geometry pipeline at all. Primitive discard happens before any texturing or pixel rendering; it is all done during vertex setup, which is the first stage of the pipeline. It can eat up bandwidth though, leaving less available for texturing. For that, something else is going on.
 
With the pump design looking to be better this time, I personally think the watercooled model at 300W is probably the way to go if looking for the efficient Vega model.
Primitive Discarding in Vega: Mike Mantor Interview. Vega 10 can reach 1.7GHz and what I got out of the interview was that it will do so quite frequently. Skip to 8:30 to see that part of the conversation.

I love how he carefully omits that, as implemented by AMD, it requires 350W to do 1600MHz, as shown by PCPer with accurate scope measurements that correlate with Tom's when both test the 300W mode. So I wonder what the TBP/TDP would be for sustained 1700MHz.
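For a rough sense of why sustained 1700MHz would be expensive: dynamic power scales roughly as

$$P_{\text{dyn}} \propto C \cdot V^2 \cdot f$$

so if that last ~6% of clock (1600MHz to 1700MHz) needed, say, 5% more voltage (an assumed figure, not a measurement), 350W would become roughly $350 \times 1.06 \times 1.05^2 \approx 410$W.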
That said, I do love how AMD gives more freedom than Nvidia to do what you want with the GPU.
Cheers
 
OK, AMD just clearly stated primitive discard is at Polaris levels; to get any more, primitive shaders MUST be used. So my initial assumption about AMD triangle throughput performance from over a year ago is now 100% validated by AMD. Based on this alone, there is absolutely no way AMD can match up against Pascal without extra work being done by developers.

Anarchist, I think you might want to take your numbers and throw them out, if you are still going along the lines of improved throughput on Vega.

PS: this has been the biggest problem of GCN and why its shader array isn't fully utilized. This explains a lot about why Vega isn't scaling well.
Finally, several of us have been saying this for a while with you.
Cheers
 
No, it wasn't; it was in Polaris though.

Texture bandwidth shouldn't be affected in this B3D test, as it shouldn't be stressing the geometry pipeline at all. Primitive discard happens before any texturing or pixel rendering; it is all done during vertex setup, which is the first stage of the pipeline. It can eat up bandwidth though, leaving less available for texturing. For that, something else is going on.

To clarify, since I'm the first to admit that I'm pretty new to this stuff and am mostly well out of my depth, you are saying that primitive discard wouldn't affect effective texture bandwidth? Would it affect texture fill rate? Because that also appears to have regressed clock-for-clock from Fiji:

[chart: Vega FE texture fill rate test results]
 
To clarify, since I'm the first to admit that I'm pretty new to this stuff and am mostly well out of my depth, you are saying that primitive discard wouldn't affect effective texture bandwidth? Would it affect texture fill rate? Because that also appears to have regressed clock-for-clock from Fiji:

[chart: Vega FE texture fill rate test results]


In that specific test, no. Let's say it's a heavy vertex-bound test, like a tessellation test, where so many vertices are created that they spill over from the cache to memory; then it will affect the available bandwidth for texturing.
 
Finally, several of us have been saying this for a while with you.
Cheers


Yeah, this was exactly why Scott didn't really answer the question when asked directly; there was nothing to answer lol.

And why they showed FP16 with primitive shaders in the hair demo on stage. They need primitive shaders to do what nV does by default with geometry.
 
To clarify, since I'm the first to admit that I'm pretty new to this stuff and am mostly well out of my depth, you are saying that primitive discard wouldn't affect effective texture bandwidth? Would it affect texture fill rate? Because that also appears to have regressed clock-for-clock from Fiji:

[chart: Vega FE texture fill rate test results]

There was a write-up somewhere about Vega. It indicated that AMD had to add a fair bit of logic to cover for the latency of having a longer pipeline, to allow for higher clocks.
 
Also worth remembering the Fury X was using 4 stacks, clearly at full spec.
Vega is using 2 stacks, and it is questionable whether it is achieving the actual higher spec (not directed at AMD but at SK Hynix, where they seem to be quoting rather more optimistic results than Samsung).
But consumer Vega may be better in this regard, as it does not use 8-Hi HBM2 stacks.

Edit:
Just remembered: has it been confirmed whether AMD is using 8-Hi from SK Hynix or Samsung? An earlier article on Anandtech says that to date only Samsung has announced a working part, and if it is Samsung, we know they clock lower than the official spec AMD is quoting for Vega in general, albeit this would be more applicable to 8-Hi, meaning what we have seen to date from Vega FE.
Cheers
 
Seems Buildzoid is recommending that the best route, if interested in Vega, is watercooled, ignoring the air variants (unless putting your own waterblock on them); over the last few weeks, seeing some of the actual spec behaviour, I've been starting to feel the same myself.
But I think one area that may be of interest is whether AMD binned the Vega watercooled edition, making it possibly a better route for some than an air variant modded with, say, an EK waterblock. Fingers crossed we get enough review information to weigh either option.
I guess the opinion may change when it comes to custom AIB air cooling, but I think the watercooled model will actually be a bit more efficient, with greater flexibility in the Boost mechanism behaviour due to thermals.
Cheers
 
Well, I was planning on the 56, and on the fence for water cooling, so we will see...
 
Well, I was planning on the 56, and on the fence for water cooling, so we will see...
Yeah, part of a decision towards a 56 and adding watercooling is whether it can also be successfully flashed to behave more like the 64.
One cannot assume this will be possible like in the past, so fingers crossed this is looked at pretty quickly by enthusiasts to see if it is possible.
Cheers
 
I remember speculating that Vega would be a six-SE design with 4608 shaders; sadly it didn't come true.
Yeah, one reason why Nvidia is now catching up in compute is that they can still scale their architecture.
Cheers
 
Seems Buildzoid is recommending that the best route, if interested in Vega, is watercooled, ignoring the air variants (unless putting your own waterblock on them); over the last few weeks, seeing some of the actual spec behaviour, I've been starting to feel the same myself.
But I think one area that may be of interest is whether AMD binned the Vega watercooled edition, making it possibly a better route for some than an air variant modded with, say, an EK waterblock. Fingers crossed we get enough review information to weigh either option.
I guess the opinion may change when it comes to custom AIB air cooling, but I think the watercooled model will actually be a bit more efficient, with greater flexibility in the Boost mechanism behaviour due to thermals.
Cheers


Interesting to note Buildzoid registered significantly lower than stock power draw while at 1800MHz under LN2, which suggests leakage is a major issue for Vega. I wouldn't have been too surprised to find it drawing the same power as stock, but 100W less, as he claimed, is just nuts.
 
Yeah, part of a decision towards a 56 and adding watercooling is whether it can also be successfully flashed to behave more like the 64.
One cannot assume this will be possible like in the past.
Cheers

Oh, I heard the part about the doubt of BIOS flashing availability...

But water cooling makes sense for reducing heat & noise (as compared to the stock reference blower cooler), and a Ryzen R7 1700 / RX Vega 56 water cooled Ncase M1 seems the right way to go...
 
Oh, I heard the part about the doubt of BIOS flashing availability...

But water cooling makes sense for reducing heat & noise (as compared to the stock reference blower cooler), and a Ryzen R7 1700 / RX Vega 56 water cooled Ncase M1 seems the right way to go...
Just consider though: if it cannot be flashed, then the lower regulated TDP/TBP will pretty much hurt the clocks achievable. You would gain a little flexibility back by undervolting, but you would unfortunately still be heavily restricted.
But yeah, I agree, a waterblock makes sense, as it will more than likely improve efficiency and Boost to some extent.
Fingers crossed enthusiasts try flashing quite early on to see if it is possible.
If on Ryzen, I would also be more inclined towards Vega for now, but only if it is getting 1080-level performance (this can be debatable, because it depends on whether FE or custom AIB 1080s are the comparison, and on one's criteria) and the price is competitive.
This late in the day, I'm not sure I would want to buy a 1070 or an AMD card of equivalent performance unless both are very competitively priced, but that is me.
Cheers
 
Seems Buildzoid is recommending that the best route, if interested in Vega, is watercooled, ignoring the air variants (unless putting your own waterblock on them); over the last few weeks, seeing some of the actual spec behaviour, I've been starting to feel the same myself.
But I think one area that may be of interest is whether AMD binned the Vega watercooled edition, making it possibly a better route for some than an air variant modded with, say, an EK waterblock. Fingers crossed we get enough review information to weigh either option.
I guess the opinion may change when it comes to custom AIB air cooling, but I think the watercooled model will actually be a bit more efficient, with greater flexibility in the Boost mechanism behaviour due to thermals.
Cheers

I can't stand watching that guy.
 
Just consider though: if it cannot be flashed, then the lower regulated TDP/TBP will pretty much hurt the clocks achievable. You would gain a little flexibility back by undervolting, but you would unfortunately still be heavily restricted.
But yeah, I agree, a waterblock makes sense, as it will more than likely improve efficiency and Boost to some extent.
Fingers crossed enthusiasts try flashing quite early on to see if it is possible.
If on Ryzen, I would also be more inclined towards Vega for now, but only if it is getting 1080-level performance (this can be debatable, because it depends on whether FE or custom AIB 1080s are the comparison, and on one's criteria) and the price is competitive.
This late in the day, I'm not sure I would want to buy a 1070 or an AMD card of equivalent performance unless both are very competitively priced, but that is me.
Cheers

I am WAY overdue for a new computer, which I plan on building in the next few months...

I want Ryzen for the 8c/16t boost to rendering & do not want to wait for Volta; so for me Ryzen / RX Vega makes sense...

And I am pretty sure Ryzen R7 1700 / RX Vega 56 will blow away my i5 750 / GTX 650Ti in Maya...!
 
True story or cool story starting here in this video?

...

I remember speculating that Vega would be a six-SE design with 4608 shaders; sadly it didn't come true.

Would it have made a difference knowing what you know now?
 
True story or cool story starting here in this video?

...

Would it have made a difference knowing what you know now?


Well, they would require an awkward ROP/MC configuration to end up with 64 ROPs and a 2048-bit bus divided between six shader engines, so that wouldn't really make sense. But assuming that is a non-issue and they had gone for 96/3072, it would provide 50% higher geometry throughput and a paltry ~12.5% increase in shader throughput, thus shifting the balance significantly towards geometry, something that appears to be a major bottleneck for Vega.
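Spelling out that arithmetic (assuming an even split across shader engines, with 768 shaders per SE):

$$\frac{64\ \text{ROPs}}{6} \approx 10.7\ \text{per SE (awkward)}, \qquad \frac{96}{6} = 16, \qquad \frac{3072\ \text{bit}}{6} = 512\ \text{bit per SE}$$

$$\text{geometry: } \frac{6\ \text{SE}}{4\ \text{SE}} = +50\%, \qquad \text{shaders: } \frac{4608}{4096} = +12.5\%$$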
 