Vega Rumors

you guys really should just stay on topic. I just got a 6-card AMD RX 580 rig set up, just to see what mining is all about. I'll put up pics when it comes in, ok? With a little post-it note that says Razor1 says hi, ok? And if it turns out good (not too time consuming), I will probably upgrade it to Vegas when Vega comes out, and get one extra on the side. Sad, isn't it, that I'm more excited about Vega for mining than anything else.

I don't even give a shit about mining at the moment but I'll do it just to see what it's all about. I buy things that are the best at what they do, not for namesake, not for fanboyism. The people that play that card (pun intended) screw up, don't give a shit, and don't have the capability to talk about anything else. So, as I have been doing and will continue to do, I just report them. Mmmmmkay?

Sorry to hear your account got hacked.

P.S. Try undervolting and comparing it to an undervolted 1060 and see how you do ;)
 
LOL they wish it was so, I know.

oh I got all the info :) Going to undervolt by around 100 millivolts, keep the core frequency around 1300 MHz and up the mem frequency to 2000; that should give me a hash rate of around 180 MH/s for the system, at around 130 watts per card. Should net around $11k per year dual mining ETH and PASC. If that all works out, I'm going to have my cousin in India set up a farm of these things lol. Maybe. It all depends though; have to think about all the logistics of a warehouse, power stability and whatnot.
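For anyone checking the back-of-envelope numbers (the $0.10/kWh electricity price below is my assumption, not from the post; the per-card rate is just 180/6):

$$6 \times 30\,\mathrm{MH/s} = 180\,\mathrm{MH/s}, \qquad 6 \times 130\,\mathrm{W} = 780\,\mathrm{W}$$
$$0.78\,\mathrm{kW} \times 8760\,\mathrm{h/yr} \approx 6833\,\mathrm{kWh/yr} \approx \$683/\mathrm{yr}\ \text{at}\ \$0.10/\mathrm{kWh}$$

So the ~$11k/yr figure has to clear roughly $700 of electricity; the revenue side depends entirely on coin price and network difficulty at the time, so treat it as a snapshot, not a forecast.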
 
Nice work, and it looks like you've done your calcs. Beware of Vega displacing the 580s, but they should still be pretty efficient either way. I've done a bit of mining here and there over the last 5-6 years but never gone balls deep like you yet. It's the middle of winter here though, and I'm having second thoughts with the 'bankers' discovering ETH.. rather have a heater that can mine ;)
Also interested, if you have a 1060 around, in comparing power/performance in gaming.

India eh, I'm building business ties there in Delhi/Bangalore currently, not a bad idea you've got there.. just beware of import taxes, they are major ass fuckers if you're not white.
 
How is it 'done' at 250W? Come on, that is still miles better than Polaris. The RX 580 is using 220W at 1400MHz+; if RX Vega doubles that performance and gives it to you at 250W, is that bad? It's only bad compared to the 1080 Ti, and more likely around 50-60 watts extra on average. But all of a sudden we expected them to catch NVIDIA in both power and performance? I love it when people say a card is dead if it uses 250W; that is not too bad at the high end.

You have to bear in mind that ever since the Fermi days, when the gtx 480 caused the Sun to break out into a sweat while the Moon's ears bled, nV fanbois have had a particular bee in their collective bonnets regarding power, heat and noise.
 
lol tell me about it. With Vega, the up-front costs don't bother me (as long as it's not more than 2.5x the cost of an RX 580 it's worth it, cause they will come back and then some); it's the perf/watt that I'm looking at, and how many cards I can put on one motherboard based on the biggest power supply I can get with a Platinum rating.

I hear ya about the taxes; what, import taxes are like 100% or something lol. Going to have my cousin buy it there and then send him the money.
 
Here's the thing: they live demo games, it's the best approach. If they live demo a synthetic, like the RPM TressFX demo, folks will just say "but who cares, it's a synthetic, baked or canned and all that"...
Well, against that, AMD did demo all the other features with real-world games or professional software, so the point is still valid: why say it needs only trivial dev/programming work but fail to show it active versus inactive? As I said, they also did real-world demos with Async Compute, and also SSG and HBCC at 2GB.
So if AMD goes out of their way for some features and not others, one needs to ask why, especially as they state it notably increases front-end performance while some see it as a trivial aspect to implement. Yeah, it comes down to interpretation of what Raja and others have said in presentations.
Cheers
 
Some go external PSU and run them on a big shared rail at your size of rig; I'd recommend that vs loading up a normal one. The 8- and 6-pins are only 12V IIRC.
E.g. a dedicated 12V PSU is more efficient than a computer PSU at part load etc. Most modern SMPS are most efficient around peak utilisation too, which is tricky when you're juggling 5V, 3.3V and 12V in one box.
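To put rough numbers on the part-load point (the 1000W unit is hypothetical; the 80 Plus Platinum thresholds at 115V are 90/92/89% at 20/50/100% load, and real curves vary per unit):

$$P_{\mathrm{wall}} = \frac{P_{\mathrm{DC}}}{\eta}: \qquad \frac{780\,\mathrm{W}}{0.92} \approx 848\,\mathrm{W} \quad \text{vs} \quad \frac{780\,\mathrm{W}}{0.89} \approx 876\,\mathrm{W}$$

That's roughly 28W of extra heat, 24/7, purely from where on the efficiency curve the rig sits, which is the argument for a dedicated 12V supply sized to its sweet spot.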

Have also heard re: taxes that it's easy to bring things in as a westerner, but hard to bring them out. Good luck, I'll be keeping an eye on your progress :)
 
It doesn't need to beat the 1080Ti, it just needs to be in the ballpark to allow them to price it high for margins.

Let's not get ahead of ourselves and expect miracles. GP102 is pretty big and close to Vega 10 in size, and AMD hasn't beaten NVIDIA in perf/mm2 since Maxwell. At best, it will trade blows.
Then they would have to price it lower. How much would that help margins?
 
interesting, will PM you about that if everything works out, not sure exactly how to do that. :)
 
Where I come from, English being the national language, when someone says "You shouldn't need to do anything special to take advantage of these improvements", it means carry on as usual and you'll still get these improvements.

But you're free to interpret as you feel. If it makes you happier with a different interpretation, go for it.
Carry on as usual for a developer means: "yeah, we will add some extensions to the API and you will have to use them". Simple as that.
 
Ironically, the slide deck of TressFX 4.0 even states they are using vertex shaders (slide 5). There we have it: it's using the geometry pipeline, not compute only. They replaced the GS stage with compute shaders, which nV hasn't done with HairWorks, cause they don't need to worry about a bottleneck since they just have more GS units.
 
This is what I said: TressFX does not go through the front end geometry-wise, it halts after the Vertex Shader; no other stages (no Tessellation at all) follow in the fixed-function pipeline. It becomes an FP16 compute job once the vertex data is stored.

And if you're concerned with Vertex Shaders limiting the performance of that RPM FP16 demo, it's obviously not bottlenecked, since they get a perfect 2x the performance. You can speculate why it's not bottlenecked; it's actually obvious if you think about it.

Either AMD beefed up the fixed pipeline OR the Vertex Shader is capable of using all of the ALUs to scale performance, something the current GCN front end cannot do. If we go by what AMD said about making their Geometry Engine "programmable" (something current GCN lacks), then it's clear that it can leverage the Stream Processors to scale performance.

The conflict appears to be whether you think this requires specific developer coding or whether AMD makes it automatic via their drivers.
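Since the 2x FP16 claim keeps coming up, here's the general idea in CUDA terms. This is not AMD's Rapid Packed Math or the TressFX demo code, just the same packed-FP16 principle: one instruction operates on two half-precision lanes. Needs a GPU with FP16 arithmetic (sm_53+), e.g. nvcc -arch=sm_60:

```
// Minimal packed-FP16 sketch: each __half2 holds two FP16 values, and one
// __hfma2 does a fused multiply-add on both lanes at once. That's where a
// clean 2x over FP32 can come from, as long as nothing else is the bottleneck.
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy_packed(const __half2 *x, __half2 *y, __half2 a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                        // n counts __half2 pairs, i.e. 2*n scalars
        y[i] = __hfma2(a, x[i], y[i]);
}

int main()
{
    const int n = 1 << 20;            // 1M pairs = 2M FP16 values
    __half2 *x, *y;
    cudaMallocManaged(&x, n * sizeof(__half2));
    cudaMallocManaged(&y, n * sizeof(__half2));
    for (int i = 0; i < n; ++i) {
        x[i] = __float2half2_rn(1.0f);    // both lanes = 1.0
        y[i] = __float2half2_rn(2.0f);    // both lanes = 2.0
    }
    saxpy_packed<<<(n + 255) / 256, 256>>>(x, y, __float2half2_rn(3.0f), n);
    cudaDeviceSynchronize();
    printf("y[0] = (%g, %g)\n", __low2float(y[0]), __high2float(y[0])); // expect (5, 5)
    cudaFree(x); cudaFree(y);
    return 0;
}
```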
 
It does go through the front end lol, you can't do it without that; the Vertex Shader is the first part of the pipeline, man.

If you're using vertex shaders there is NO way to avoid the front end. The GS is the final stage prior to rasterization.

They never fixed the issue; the front end is everything prior to the rasterizer, and the GS is where the problems lie for AMD, cause they just don't have enough units. The issue is just that: not enough GS units. They avoided the issue by circumventing the GS. Yes, it's very obvious if you think about it: no GS, no problem for AMD cards, but the GS needs to be replaced with the primitive shader, which is just another adoption of the unified shader schema.

Think of it as two blocks: the front end constitutes all vertex work (everything up to the GS is front end), and the back end all pixel work (everything from the rasterizer through the pixel shaders to the ROPs is back end).

Also, you don't seem to understand what unified shaders are all about: the ALUs function either as a pixel shader, vertex shader or compute shader. There are no dedicated vertex shader units anymore; there were in the past, prior to the G80 and R600, when cards had a separate vertex pipeline and pixel pipeline.
 
You guys talk about unified shaders as if AMD's geometry processing scaled with ALU/Stream Processor count. It did not, never has.

If Vega fixes this, then its 4096 SPs will contribute massively to geometry performance, and that is ALL that matters here.
 
No, it does matter, that's what you are not understanding. If developers don't use primitive shaders and stick with the traditional API (because that is what DX uses), and adoption of the new extensions via Vulkan is minimal, or a new version of DX12 with cap bits for primitive shaders and RPM only comes out much later, the problem is still there. The trouble is that Xbox doesn't utilize primitive shaders or RPM, as it doesn't have them, and MS is trying to unify the Xbox and PC development environments, so a new DX version with caps is unlikely to happen anytime soon. AMD would then have to create an SDK specific to Vega's features.

Adoption of new features is only viable as long as they are used, and at this point I don't think another PC-specific version of DX is in MS's direction, so AMD is left with Vulkan and Vulkan only, and its adoption rate has been slow.

Pretty much, game developers will have to look at it as an accessory feature, not a must-have, since console development drives PC games. In the long run, yeah, they will become usable, but let's think about 2 or 3 years down the road, once the next consoles are on the horizon.

PS: Most likely it can't be done through shader replacement either, cause the pipeline for primitive shaders would likely be different than with the GS. But I'm willing to wait for more info on the new pipeline stage.
 
How are you so definitely sure that AMD absolutely cannot program primitive shaders via drivers, or that the rasterizer function has to be programmed by developers as well? You absolutely cannot claim either unless you know for sure, with definitive proof to back it up. I will reserve judgment until I see it in action, and until after the RX series of Vega is out.
 
This is what I find so odd with these guys: they claim with such certainty, when in fact they have no insider info about Vega or AMD's utilization of its HW/drivers.

How can they rule out the major possibility that Vega's Geometry Engines were designed in a way that enables AMD to activate Primitive Shaders by default via their drivers, replacing the previously multi-step shaders and vastly boosting geometry performance, scaling with ALU count?
 
You point the finger at people while demonstrating you are missing a basic understanding of the pipeline?

Seriously?!
 
There is no misunderstanding. I said right from the start that the TressFX RPM FP16 demo bypasses GCN's geometry limitation; it doesn't even use Geometry Shaders. Razor claimed it uses Tessellation; I showed proof it does not, not in the recent versions. It moved to compute-based a while ago.

So excuse yourself out of the discussion.
 
They seem to have the assumption that fixed functions can't be programmed, as if the more flexible programmable method were unable to perform even the most basic mathematical task. The concept is extremely simple: use a default shader, or let a programmer substitute their own, all without requiring the fixed-function hardware Nvidia has been relying on for years. Fixed is more efficient, but not as flexible.
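The "default unless overridden" idea is ordinary software plumbing; here's a hedged CUDA sketch of the pattern. The names are invented for illustration, and this is emphatically not how a driver wires up primitive shaders, just the concept of a stage with a built-in default that a developer can swap out:

```
#include <cuda_runtime.h>
#include <cstdio>

struct DefaultTransform {   // stands in for a sane built-in default
    __device__ float operator()(float v) const { return v; }        // pass-through
};

struct CustomTransform {    // a developer-substituted version
    __device__ float operator()(float v) const { return 2.0f * v; } // custom behaviour
};

template <typename Stage>
__global__ void run_stage(const float *in, float *out, int n, Stage stage)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = stage(in[i]);   // same pipeline, pluggable stage
}

int main()
{
    const int n = 4;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = float(i);

    run_stage<<<1, n>>>(in, out, n, DefaultTransform{});  // nothing overridden
    cudaDeviceSynchronize();
    printf("default: %g %g %g %g\n", out[0], out[1], out[2], out[3]);

    run_stage<<<1, n>>>(in, out, n, CustomTransform{});   // developer override
    cudaDeviceSynchronize();
    printf("custom:  %g %g %g %g\n", out[0], out[1], out[2], out[3]);

    cudaFree(in); cudaFree(out);
    return 0;
}
```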
 
This is the thing: NV has been doing this for years, yet AMD claims they will do it for Vega, and suddenly all these doubters show up and say AMD can't get their geometry performance to scale with ALUs?

I have to bring up the naysayers about clock speeds. There have been a lot of remarks over the past 6 months about how there's no way for AMD to clock Vega as well as Pascal, and just look at the 1600MHz Vega FE, in the ballpark of GP102 stock clocks, no?
 
But at the same time, about clock speeds: we have no idea if that is near the maximum clock speed it can reach. Do not forget AMD said the Fury X was an overclocker's dream..... yeah, we all know how that turned out.
 
The clocks aren't really comparable. Instructions and logic gates only have so many possibilities; just look at a CPU with its pipelined operation. No GPU is close to the 4GHz of most CPUs, but it's possible! CPUs target higher IPC, and AMD listed higher IPC in addition to packed math. Get rid of the FMA in favor of separate multiply and add instructions and the clock speed goes up substantially, at the cost of throughput. That downside can be designed around.
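On the FMA point: whether dropping fused multiply-add buys clock speed is a hardware design question, but the functional difference between fused and split is easy to show. A minimal CUDA sketch (the inputs are contrived so the intermediate product needs more precision than a float holds):

```
#include <cuda_runtime.h>
#include <cstdio>

__global__ void fma_vs_mul_add(float a, float b, float c)
{
    float fused = __fmaf_rn(a, b, c);            // a*b+c with a single rounding
    float split = __fadd_rn(__fmul_rn(a, b), c); // rounds after the mul, then again after the add
    printf("fused=%.10g  split=%.10g\n", fused, split);
}

int main()
{
    // (1+e)(1-e) - 1 = -e^2, which the split version loses to rounding entirely
    fma_vs_mul_add<<<1, 1>>>(1.0f + 1e-7f, 1.0f - 1e-7f, -1.0f);
    cudaDeviceSynchronize();   // expect fused ~ -1.42e-14, split = 0
    return 0;
}
```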
 
Lisa Su seems to be tempering expectations (again), potentially putting the RX Vega launch "a couple of months" out.

https://www.overclock3d.net/news/gp...t_the_rx_vega_being_2_months_away_are_false/1

Although she wasn't specific about the markets to which she was referring.

"Frontier Edition will ship with 16GB of HBM2 towards the latter half of June. You will see the enthusiast gaming platform, the machine learning platform and the professional graphics platform very soon thereafter. And so, we will be launching Vega across all the market segments over the next couple of months."
 
It has to use the vertex shader, man; that doesn't mean it's pure compute lol. It uses the hull shader to create the vertices.

There is no way to access the hull shader without using direct compute.....

Geez, even without adaptive tessellation, with pure tessellation there is no way to access the hull shader without a shader component.

I'm not talking about using the GS to do tessellation either; the GS only has major (performance) issues, as seen with DX10, which is what you are getting at. That is not what I'm talking about.

There was a reason why HairWorks didn't use adaptive tessellation: without it, it hurts AMD cards. And with Polaris, same situation, just at 2x the amounts.

The hull shader is fixed function and needs input from the vertex shader to start

this is the pipeline:

[Image: Direct3D 11 graphics pipeline diagram (Input Assembler → Vertex Shader → Hull Shader → Tessellator → Domain Shader → Geometry Shader → Rasterizer → Pixel Shader → Output Merger)]


Now you tell me, what did AMD remove? The Geometry Shader, which is at the end. It still needs those front-end issues solved, cause they solved only a portion of them by going to compute instead of the GS. The Hull Shader and Domain Shader are fixed-function units, man.

Sorry had to step away for a moment.

Ok, now once the Hull Shader, Tessellation and Domain Shader are done, this is where AMD goes back to the shader array for TressFX, bypassing the GS, which is its bottleneck. Why is it a bottleneck for AMD? It only has 2 or 4 GS units depending on the gen, cause that is what it's limited to; we all know that. nV have a GS unit per SM, so the bigger the chip, the more GS's they have, A LOT more. So AMD hardware on the traditional DX11 and DX12 pipeline gets bottlenecked before nV cards; that is why Fiji and Polaris, if pushed too hard on the GS front, score gtx 960 level or lower in synthetic tests. Vega, if it's got 4 units, will have the same problem with the traditional pipeline, just with clock speed factored in.
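To put a hedged number on that bottleneck (the ~4 primitives per clock figure is the commonly cited GCN peak; the clock and framerate are just for illustration):

$$4\ \mathrm{prims/clk} \times 1.25\,\mathrm{GHz} = 5 \times 10^{9}\ \mathrm{prims/s}, \qquad 40\,\mathrm{M\ polys/frame} \times 60\,\mathrm{fps} = 2.4 \times 10^{9}\ \mathrm{polys/s}$$

So a 40M-poly scene at 60fps already eats about half the theoretical peak before any culling or load-balancing inefficiency, which is why the synthetic tests fall off a cliff past that point.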

How do they get around that? Primitive shaders. RPM can't get around it by itself, cause it just can't; RPM is used in the vertex shader portion of the pipeline, before the fixed-function components are involved.
Good grief, TressFX uses neither Tessellation nor the Geometry Shader. Vertex shader and compute shader. It looks superior to NVidia's crap geometry-based POS that makes even the fastest NVidia cards look like snails on a highway. It is great work by AMD and developers that works well on all cards, unlike NVidia HairLessWorks, which doesn't work well on any card.

Not all is well in fairy land, I mean NVidia land - yes, NVidia does some amazing stuff, but HairWorks is not one of them.
 
Well, the end of June is already over a month out, and she said they will be launching all of the Vega cards over the next couple of months.
To me that reads as Vega being released in the next couple of months. Very much looking forward to that.
 
May is 1, June is 2. Couple = 2 = June launch confirmed. lol

ps. It's silly how anyone can be so certain from such a vague statement, with "You will see the enthusiast gaming platform, the machine learning platform and the professional graphics platform very soon thereafter".. what is the definition of very soon???!
 
I don't wanna bash HairWorks; it relies on geometry & tessellation because NV GPUs are better at it. TressFX relies on compute because AMD GPUs have more raw TFlops to spare. I just wanted to explain why the RPM 2x FP16 demo actually works in TressFX, because some ppl misunderstood, thinking it's geometry & tessellation based when it's not.

Some people said that RPM 2x FP16 is useless for gaming, but with that demo AMD proved it can be useful (how useful is obviously up for debate). In the past, some talked about the HBCC being useless for gaming, but with 2-4x effective vram capacity it can be very useful for cheaper 4GB SKUs, or heck, on their Vega APUs if they get around to using HBM2 for those. A 2GB HBM2 Vega APU using the HBM2 as cache will be kick-ass for notebooks & mITX. There are use cases, if people just had an open mind instead of trying to shit on AMD at every opportunity.
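For the HBCC scepticism: the nearest everyday analogue I can point at is CUDA unified memory oversubscription (Pascal or later, and it needs Linux), where VRAM behaves as a cache over a larger pool and pages migrate on demand. A minimal sketch of that analogue, not HBCC itself:

```
#include <cuda_runtime.h>
#include <cstdio>

__global__ void touch(unsigned char *p, size_t n, size_t stride)
{
    size_t i = ((size_t)blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n) p[i] = 1;   // faulting a page migrates it to the GPU on demand
}

int main()
{
    size_t free_b, total_b;
    cudaMemGetInfo(&free_b, &total_b);
    size_t n = total_b + total_b / 2;   // ask for 1.5x physical VRAM
    unsigned char *p;
    if (cudaMallocManaged(&p, n) != cudaSuccess) {
        printf("allocation failed\n");
        return 1;
    }
    touch<<<4096, 256>>>(p, n, n / (4096 * 256));
    cudaDeviceSynchronize();            // completes despite exceeding VRAM
    printf("touched %zu bytes with only %zu bytes of VRAM\n", n, total_b);
    cudaFree(p);
    return 0;
}
```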
 
Using ALU resources to perform front-end functions traditionally handled by fixed function hardware is a rather odd solution given the focus on async compute allowing GCN to circumvent that bottleneck by overlapping processing of frames.
 
Err, TressFX uses the hull shader and does tessellate; the compute shader replaces the GS portion of the shader code. The GS is what creates the adaptive portion of adaptive tessellation ;). Now, the rest of it is not worth talking about, because the topic isn't TressFX vs HairWorks and which looks better.
 
You stated it didn't use the traditional pipeline; I stated it did, and showed you so with your own linked paper. The only part that was circumvented was the GS, which is AMD's problem area. But that doesn't mean their problems are over: if I decided to increase my base polygon counts in games by 2x, AMD is screwed. I don't know about you, but here,
[Image: screenshot of a character model]


here is a character I'm working on for a next-gen game.

He is at, you know what, 320k polys; that is 4-5 times more than characters are now. I can cut him down to 120k polys (taking out all my edge loops), so that would be 2-3 times more than today's games. You see the problem now?

This is all before tessellation. If I were to put tessellation in there, we are talking about millions of polys for just the character: it's like 1.3 million at a factor of 1, and x4 for each extra factor, and this game will recommend a factor of x2 for everything. But there won't be any complaints about over-tessellated models, cause I am making sure these models have extremely optimized polycounts.
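For concreteness, the scaling described works out like this (taking x4 triangles per tessellation factor step, as the post does):

$$320\,\mathrm{k} \times 4 \approx 1.3\,\mathrm{M}\ (\text{factor } 1), \qquad 320\,\mathrm{k} \times 4^{2} \approx 5.1\,\mathrm{M}, \qquad 320\,\mathrm{k} \times 4^{4} \approx 82\,\mathrm{M}$$

which matches the ~82 million figure that comes up later in the thread.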
 
Yes, it looks that way; shader replacement isn't easy with these types of things.

I know a shit ton more about these things than you, NKD, and have shown you many times I am a dev; just look at the post above. I don't make things up; it's from experience and knowing what can and can't be done, and things like this aren't easily done. Replacing the old pipeline with a newer pipeline when the developer hasn't used the SDK or updated API will create a major headache, since you are no longer just replacing the shader itself; you will need to change the base code that accesses the shader code. It's not a simple replace-x-with-y; it's replace everything that makes x, x with everything that makes y, y.

AMD will need to clean out the plumbing to change the result. That is the problem I foresee, but it depends on how the primitive shader pipeline is set up: is it apart from the traditional pipeline, or does it replace the traditional pipeline? I don't know that yet, nor does anyone, but I don't see how they can flat out replace the old one and emulate it, cause that would cause other issues too. Adaptive tessellation is fairly computationally heavy once it affects the pixel shaders as well; by itself, tessellation (adaptive or not) on AMD's shader units is cheap, but when you factor in lighting and whatnot it's not that cheap when talking about million-poly figures.

Yeah, and the artwork I'm doing won't have pixel-size polys even with tessellation active at x4. So you can take that 320k character, bump him up by x4, x4, x4, x4 polys, and you get what, ~82 million? Ridiculous numbers, but if I'm correct that is 2 times more than what Vega will be able to handle if using the GS, and it's going to put a huge strain on its shader array if all the assets are going to be like that. It will be possible in the game to go up that high; not recommended, but if someone is stupid enough to do it, they can.
 
This is where we have our disagreement. Recent TressFX does not use the traditional geometry pipeline; a traditional front-end pipeline implies vertices being transformed into completed primitives. You had also claimed it uses Tessellation (a front-end function), which it does not. Nowhere in the source materials on TressFX is it shown, and they specifically say not to use Tessellation, since it's slow on GCN. You can actually see this in action easily in Tomb Raider: disabling Tessellation altogether via AMD driver settings does not affect Lara's hair at all. With NV's HairWorks, changing Tessellation factors via AMD's drivers has a major effect.

But besides the point: I don't know how AMD increases Vega's geometry performance overall. They claim Primitive Shaders + Load Balancing, but we do not know how it works, and this is the other disagreement. You think it will need specific dev coding to implement; I do not. That's it, we could be right or wrong, but speculate away. I don't pretend to be certain and claim that AMD will handle it all via drivers, but it's a possibility.
 
Very soon is not soon enough.
 
I don't have TR, never really liked those types of games LOL. And you can't compare HairWorks to TressFX like that: AMD cards don't take a nosedive till they hit, oh, 40 million polys or so, so if the tessellation factors in TressFX aren't high enough to hit that, you won't see a performance drop when you lower the factor amounts ;). It's a hard bottleneck too. That is why I'm specifically making the levels, characters and all assets to hit that 40 million level (Polaris), cause after that AMD is screwed. I would prefer if that limit wasn't there, cause then I could go crazy and add much more detail. The assets I'm doing are already close to, if not at, cinematic levels, but I would prefer to do a little bit more, because my goal is to make animations and physics look more realistic, instead of the stuff we see in movies where the joints, cloth and other things look a little fake; they move a bit too smoothly, if you know what I mean. Now, if you have used PN triangles in Unreal Engine, you will know that you can turn the adaptive portion on and off; we have that on.

You just can't do it simply through shader replacement, cause it will affect the end result of the pixel shader, so they need to replace everything, not just the GS portion. Showing it off in a demo is quite different from games, where there is no single standard. Look at the post under (after) the one you quoted: every single mesh will have a higher level of physics and fluid simulation, for EVERYTHING, even the atmosphere. Yes, all gases are physically simulated to give the lighting a more realistic effect (that screenshot doesn't show it, but it's something we are working on). All of these things start from the mesh, man. So I don't think there is anything AMD can do.
 
Of course not, Razor1 should have been . It runs slow and is not a good solution. Even with two 1070s in Witcher 3 it sucks the frames down for almost zero reason at times. AMD's solution works great on NVidia hardware; hairworthless doesn't work great on anything.
 
Yes, not worth talking about. AMD said no tessellation or GS. Vertex in, vertex out.
 