Vega Rumors

Yeah, the whole tessellation thing is just AMD's pipelines stalling once a certain number of triangles has to be rendered; that has been their weakness since DX11 games started using tessellation. It comes down purely to the number of geometry units they have. The only reason I can think of is that they allocate so much die space to the shader array that they don't have room for more geometry units. I'm pretty sure the geometry units are decoupled in GCN. If they aren't, that would be the issue, but with GCN 1.3 they were able to increase the GUs from 2 to 4, so I'm thinking they are decoupled.
 
And developers in your pocket. Remember that little story about how AMD couldn't run Hairworks in Witcher 3 without a severe performance degradation and all the Nvidia guys rang the bell? Then later someone found a way to enable Hairworks on AMD cards and it actually ran better on the AMD cards than on the Nvidia cards.
Proprietary, vendor-marketed/locked feature nonsense FTL.
So some random guy did a better job than AMD's driver team. That is not proprietary lockout, that is AMD's shitty driver team. Same with Gameworks. NVIDIA does not cripple games; AMD's shitty driver team does.
 
Yeah, the whole tessellation thing is just AMD's pipelines stalling once a certain number of triangles has to be rendered; that has been their weakness since DX11 games started using tessellation. It comes down purely to the number of geometry units they have. The only reason I can think of is that they allocate so much die space to the shader array that they don't have room for more geometry units. I'm pretty sure the geometry units are decoupled in GCN. If they aren't, that would be the issue, but with GCN 1.3 they were able to increase the GUs from 2 to 4, so I'm thinking they are decoupled.

The new excuse on the AMD reddit is that UE4, thanks to an Nvidia conspiracy, is not culling unseen geometry and renders unnecessary polygons to stall GCN, all in a master plan to doom Vega, which is actually faster than GP102. They never disappoint.
 
Yes, doing a blatantly stupid decode would have huge impacts. Good thing we don't do that then, right? Yes, uOp caches are one aspect of making it a non-issue.
Short version - of course x86/x64 decode could be a problem. But it isn't, and the amount of chip complexity dedicated to making sure it isn't is really not very big.

I'm glad the x64 extension looks trivial. That means we all did our job in making huge improvements pretty painless.

uop caches only reduce the issue, they don't eliminate it fully. There are situations where the pipeline cannot be fed from the uop cache and instructions have to go through the decode stage. Moreover, adding a uop cache adds to the complexity of the design. Ask AMD about the uop cache bug in Zen. ;)

Thus we have a bloated decoder, and then we add more bloat to the microarchitecture (the uop cache design) to partially reduce the decoder overhead. It is much simpler, more elegant, and more efficient to eliminate the problem with a better ISA.
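To make the point concrete, here is a deliberately dumbed-down toy model (made-up costs and a hypothetical ToyFrontEnd struct, nothing to do with any real core): a hot loop streams out of the uop cache after the first pass, but cold, straight-line code still has to go through the full decoders every time, so the decode hardware never actually goes away.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// Toy front-end model (illustrative only, not any real microarchitecture).
// A hit in the uop cache skips the decode stage; a miss pays the full decode
// cost and then fills the cache. The decoder is exercised less often, but it
// never disappears.
struct ToyFrontEnd {
    std::unordered_map<uint64_t, int> uopCache; // PC -> cached uops (toy)
    long cyclesDecode = 0, cyclesBypass = 0;

    static constexpr int kDecodeCost = 4; // made-up cost of the x86 decoders
    static constexpr int kHitCost    = 1; // made-up cost of a uop-cache hit

    void fetch(uint64_t pc) {
        if (uopCache.count(pc)) {
            cyclesBypass += kHitCost;    // decoder bypassed
        } else {
            cyclesDecode += kDecodeCost; // decoder still needed
            uopCache[pc] = 1;            // fill for next time
        }
    }
};

int main() {
    ToyFrontEnd fe;
    std::vector<uint64_t> hotLoop = {0x100, 0x104, 0x108, 0x10c};
    for (int i = 0; i < 1000; ++i)           // hot loop: hits after first pass
        for (uint64_t pc : hotLoop) fe.fetch(pc);
    for (uint64_t pc = 0x2000; pc < 0x6000; pc += 4)
        fe.fetch(pc);                         // cold straight-line code: all misses
    std::printf("decode cycles: %ld, uop-cache cycles: %ld\n",
                fe.cyclesDecode, fe.cyclesBypass);
}
```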

Maybe I didn't express myself clearly, but I did mean that AMD's development of x64 as an extension of 32-bit was a trivial task compared to the alternative approach of developing a separate, clean 64-bit ISA from scratch. That alternative would require AMD engineers to think about what to eliminate from the ISA, what to keep, and how to improve the efficiency and elegance of the whole design. It is no coincidence that x86 engineers at AMD feel "a little daunted" by the ARM64 ISA.
 
The new excuse on the AMD reddit is that UE4, thanks to an Nvidia conspiracy, is not culling unseen geometry and renders unnecessary polygons to stall GCN, all in a master plan to doom Vega, which is actually faster than GP102. They never disappoint.
Even an OC'ed 1070 can match Vega 64 in several DX11 titles!
 
The new excuse on the AMD reddit is that UE4, thanks to an Nvidia conspiracy, is not culling unseen geometry and renders unnecessary polygons to stall GCN, all in a master plan to doom Vega, which is actually faster than GP102. They never disappoint.


Really funny when the source code can have its own branches if AMD wishes to!
 
uop caches only reduce the issue, they don't eliminate it fully. There are situations where the pipeline cannot be fed from the uop cache and instructions have to go through the decode stage. Moreover, adding a uop cache adds to the complexity of the design. Ask AMD about the uop cache bug in Zen. ;)

Thus we have a bloated decoder, and then we add more bloat to the microarchitecture (the uop cache design) to partially reduce the decoder overhead. It is much simpler, more elegant, and more efficient to eliminate the problem with a better ISA.

Maybe I didn't express myself clearly, but I did mean that AMD's development of x64 as an extension of 32-bit was a trivial task compared to the alternative approach of developing a separate, clean 64-bit ISA from scratch. That alternative would require AMD engineers to think about what to eliminate from the ISA, what to keep, and how to improve the efficiency and elegance of the whole design. It is no coincidence that x86 engineers at AMD feel "a little daunted" by the ARM64 ISA.

I worked on the chips in question, and I assure you I am quite familiar with the tradeoffs involved.

For desktop and even laptop usage, it is irrelevant. It's a tiny portion of the die to support this "bloated decoder". It's really not a big deal - it's a fractional percent of die size, and power. In return you get an incredibly mature optimizing compiler, and software compatibility. Those have historically far outweighed the small silicon cost.

Where it can hurt is mobile - where you're working in incredibly constrained power envelopes. That's why x86/x64 is a non-factor in those cases: yes, even a slightly more complex decoder can make you an also-ran.
 
They murdered the AA on AMD to make Hairworks work better... not like it was free.

The vast majority of Gameworks has the same effect on both nVidia and AMD. If there is a difference, it's usually fixed after a few patches. nVidia could have made Gameworks not work on AMD at all if they really wanted to.

Gameworks is basically plug and play for AMD. AMD's tech is almost never plug and play for nVidia.

Watch this about hairworks

https://m.twitch.tv/amd/v/5335751?desktop-redirect=true#

As to sharing tech: your statement is rubbish, and if you don't know better you are either new here or have fanboy blinders on.

PhysX
Hairworks
HBAO
CUDA

Arguably G-Sync
 
This is rubbish, and if you don't know better you are either new here or have fanboy blinders on.

PhysX
Hairworks
HBAO
CUDA

Arguably G-Sync
Not entirely sure what your point is. If the argument is that they are proprietary, then yes, absolutely. If it's that that is somehow bad, then I disagree.
 
Man, it would be wonderful if we could see that parallel universe where it was Intel that created x86-64, and see how your statement would differ. Only you have ever downplayed every AMD advancement as "trivial". That has got to be a lot of work, ensuring that every positive AMD post across the web gets quashed with such irreverent disposition. I wonder if anyone takes you seriously. Seeing posts like this, it just seems obvious to me that you have an agenda, and it isn't truth or reason.


This had a lot more to do with MS's direction than Intel's. Intel should have been smart enough to know what MS would want. Itanium was just a really stupid move by Intel: they wanted to demarcate the business world from the general consumer, giving them more leverage on chip pricing. That doesn't work when both businesses and consumers were using Windows as their primary OS. Intel's arrogance and greed, followed by stupidity, got the better of them.

Keep this in mind: Intel spent billions developing Itanium. They wanted to pressure MS but couldn't, and when that didn't work they spent billions more to get software devs to write software for Itanium; that too fell flat. It was just too different an approach for developers - the time to rewrite their software for something like a 15 or 20% performance boost wasn't worth it.

AMD's x64, although not trivial, was a milestone and really did help general consumers get much more powerful and capable systems; it was Intel's follies with Itanium, together with MS's blessing of AMD's x64, that helped it succeed. Then there was Pentium 4 for general consumers - Intel was going in the wrong direction there too. Let's just say Intel had used the PII as their base and created their Pentium M line instead of the Pentium 4: what would have happened to the AMD Athlon then?

This is a what-if, but it wasn't like Intel was missing anything in their vast IP; they just took the wrong directions. And that is exactly what they did once Pentium 4 failed. Then we got Nehalem and a few more chips, all better than Athlon and Phenom, before we saw the first truly reworked architecture, Ivy Bridge.

Now, it took AMD a decade to come back from that, and we have no damn clue what IP Intel has been working on in those 10 years; they have been milking Ivy Bridge since then. Whatever is in their development pipeline, there is nothing they can do to change it in the short term, but in the mid term, say 2 more years, they can change quite a bit. Coffee Lake was always slated to be a 6-core part - it was from when they first started designing it, not a reaction to AMD's Ryzen. Coincidentally, right around the same time, Ryzen comes out with 8 cores. Why? The economics fit? I don't think it was a coincidence that Intel finally went to 6 cores and AMD went to 8 cores at the same time.

Funny thing is, whenever we see a tech company that is integrated deeply into our personal systems - Intel, AMD, nV, etc. - succeed or fail, it can usually be traced back to MS's direction. This is why software and hardware go hand in hand. Can't have the one without the other.
 


Oddly enough, AMD was well aware Witcher 3 was using Hairworks close to a year before. They knew it was coming. It was Hallock and Roy Taylor, mostly Roy, who said they had no idea Hairworks was coming with Witcher 3 lol. They knew it and were well aware of it. So then this video was done, saying they didn't have enough time to work with it.

Problem is, any game that uses more triangles hurts AMD. There are no ifs and buts about it. Doesn't matter if it's Gameworks or not.

Cry Engine 2 and 3 didn't use Hairworks, and their tessellation still hurt AMD cards!

When a company complains that it didn't have time to do something, most likely it couldn't have done anything about it anyway.

In any case, both companies play this game to different degrees. Async compute comes to mind too, right? Maxwell had issues with it - issues that AMD stated were purely architectural and that gave their hardware an advantage. It did, to a certain degree, but they also blamed nV, stating their architecture was not capable of doing it, and to this day people still say it doesn't work on Pascal. AMD also pointed to its scheduler lol, claiming that was what let it do async compute and that nV's lack thereof is what stopped them, which was absolute BS. It wasn't a problem with the scheduler; it was a problem with how the SMs were set up in Maxwell. That had nothing to do with the scheduling of instructions and was fixed to a certain degree in Pascal. Granularity-wise it's still behind GCN, but small steps get things done.
 
Oddly enough, AMD was well aware Witcher 3 was using Hairworks close to a year before. They knew it was coming. It was Hallock and Roy Taylor, mostly Roy, who said they had no idea Hairworks was coming with Witcher 3 lol. They knew it and were well aware of it. So then this video was done, saying they didn't have enough time to work with it.

Problem is, any game that uses more triangles hurts AMD. There are no ifs and buts about it. Doesn't matter if it's Gameworks or not.

Cry Engine 2 and 3 didn't use Hairworks, and their tessellation still hurt AMD cards!
In all fairness, Crysis 2 did use tessellation in the most absurd ways possible.
 
In all fairness, Crysis 2 did use tessellation in the most absurd ways possible.


Actually it didn't; it only looked that way when viewing in wireframe mode. Because the pixels weren't occluding anything, it would over-tessellate everything with the tessellate flag. When not in wireframe mode that does not occur. I talked to one of the devs of Cry Engine and he stated that the first articles about it got it all wrong - they didn't understand how the engine worked. At that time I was developing a game on Cry Engine too, so yeah, I saw the difference he was talking about: in wireframe mode with debug mode on you can see the triangle counts jump up much higher than when not in wireframe mode.
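If it helps, here is a toy sketch of what I mean (made-up numbers and a hypothetical Patch struct, not actual engine code): with occlusion in play most patches never reach the tessellator, but in wireframe/debug mode nothing occludes anything, so the triangle counter balloons even though a normal frame never pays that cost.

```cpp
#include <cstdio>
#include <vector>

// Toy sketch of why wireframe/debug triangle counts mislead (not engine code).
// In a normal frame, patches hidden behind other geometry are rejected before
// tessellation; in wireframe mode nothing occludes anything, so every patch in
// view gets tessellated and the debug counter jumps way up.
struct Patch { bool occluded; int tessFactor; };

long TessellatedTriangles(const std::vector<Patch>& scene, bool wireframeMode) {
    long triangles = 0;
    for (const Patch& p : scene) {
        if (!wireframeMode && p.occluded) continue;    // culled before tessellation
        triangles += 2L * p.tessFactor * p.tessFactor; // ~2*f^2 tris per quad patch
    }
    return triangles;
}

int main() {
    std::vector<Patch> scene;
    for (int i = 0; i < 1000; ++i)
        scene.push_back({i % 5 != 0 /* ~80% hidden behind walls/terrain */, 16});

    std::printf("normal render:  ~%ld triangles\n", TessellatedTriangles(scene, false));
    std::printf("wireframe mode: ~%ld triangles\n", TessellatedTriangles(scene, true));
}
```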
 
Actually it didn't; it only looked that way when viewing in wireframe mode. Because the pixels weren't occluding anything, it would over-tessellate everything with the tessellate flag. When not in wireframe mode that does not occur. I talked to one of the devs of Cry Engine and he stated that the first articles about it got it all wrong - they didn't understand how the engine worked. At that time I was developing a game on Cry Engine too, so yeah, I saw the difference he was talking about: in wireframe mode with debug mode on you can see the triangle counts jump up much higher than when not in wireframe mode.
Did not know that and that is certainly interesting.
 
Marketing bullshit. NVIDIA is in no way required to optimize software designed for their hardware for their competition. Whiners blame their competition for their own inability to compete. If AMD were able to deliver the same visuals through their own software suite, they would.
There is a bit of that, but the theme is that Witcher 3 enabled 64x tessellation when no visible change occurred over 16x tessellation, which AMD handled just fine. So 64x penalized AMD on benchmarks (and older Nvidia hardware too) with no appreciable increase in quality. It was a false, no-benefit setting designed to put the newest Nvidia chipsets in the strongest light for marketing purposes.
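To put rough numbers on it (a conceptual sketch only, not real driver code, and the ~2*f^2 triangles-per-patch estimate is an approximation): capping a 64x request at 16x, which is effectively what AMD's driver-level tessellation override does, cuts the triangle load by roughly 16x with nothing visible to show for it.

```cpp
#include <algorithm>
#include <cstdio>

// Conceptual sketch of a driver-side tessellation cap (not actual driver code).
float ClampTessFactor(float requested, float userCap) {
    return std::min(requested, userCap);
}

// Very rough triangle count for a quad patch at factor f: about 2*f*f.
long TrianglesPerPatch(float f) {
    return static_cast<long>(2.0f * f * f);
}

int main() {
    const float requested = 64.0f; // what the game asks for
    const float cap       = 16.0f; // what the user sets in the driver panel
    const float effective = ClampTessFactor(requested, cap);

    std::printf("requested %.0fx -> ~%ld triangles per patch\n",
                requested, TrianglesPerPatch(requested));
    std::printf("capped at %.0fx -> ~%ld triangles per patch (~%.0fx fewer)\n",
                effective, TrianglesPerPatch(effective),
                static_cast<float>(TrianglesPerPatch(requested)) /
                    static_cast<float>(TrianglesPerPatch(effective)));
}
```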

I'll agree both parties have twisted the truth, but I'll contend Nvidia definitely does so more regularly and more heavy-handedly pushes the proprietary nonsense.

(Remember Nvidia disabling PhysX when an AMD card was present in the system.)
 
Did not know that and that is certainly interesting.


Yeah, Cry Engine 2 was using adaptive tessellation, and everything with adaptive tessellation is based on the FOV, so if there is no occlusion everything is going to be rendered.

Hence why the water was always tessellated. On any Cry Engine map with any amount of ocean, that plane was going to be rendered and tessellated in wireframe mode. There was no way to turn off portions of it; it just wasn't set up that way.
 
Marketing bullshit. NVIDIA is in no way required to optimize software designed for their hardware for their competition. Whiners blame their competition for their own inability to compete. If AMD were able to deliver the same visuals through their own software suite, they would.

Nvidia did some shit like, oh, disable PhysX if it detects an AMD card in the system, even if you also had an Nvidia card to run PhysX with. Which, by the way, is bullshit anyway. When they acquired the PhysX property, they retroactively added the feature to chips that were a year or two old, so it's not like they require special hardware to run it!

Taking cool shit and making it closed source, then pushing it into games (and something like game physics which can fundamentally change game mechanics/interactions) is fucked up and straight anti-competitive.

Not saying AMD is a saint, but Jesus, really?

Also, Nvidia could have been happy with making money on G-sync licensing and hardware sales (the modules, they could make money on those) in each monitor and let AMD run it too, giving themselves an even bigger market for those monitors.

But no, no, no, they won't.
 
There is a bit of that, but the theme is that Witcher 3 enabled 64x tessellation when no visible change occurred over 16x tessellation, which AMD handled just fine. So 64x penalized AMD on benchmarks (and older Nvidia hardware too) with no appreciable increase in quality. It was a false, no-benefit setting designed to put the newest Nvidia chipsets in the strongest light for marketing purposes.

I'll agree both parties have done it, but I'll contend Nvidia is definitely more proprietary and sneaky (shady) in the games they play.

(Remember Nvidia disabling PhysX when an AMD card was present in the system.)


There was a difference. When playing, yeah, you can't really see it, but in screenshots and in-game cinematics you can. At x16 the hair looked like grass: thin and sparse, with odd angles.
 
Nvidia did some shit like, oh, disable PhysX if it detects an AMD card in the system, even if you also had an Nvidia card to run PhysX with. Which, by the way, is bullshit anyway. When they acquired the PhysX property, they retroactively added the feature to chips that were a year or two old, so it's not like they require special hardware to run it!

Taking cool shit and making it closed source, then pushing it into games (and something like game physics which can fundamentally change game mechanics/interactions) is fucked up and straight anti-competitive.

Not saying AMD is a saint, but Jesus, really?

Also, Nvidia could have been happy with making money on G-sync licensing and hardware sales (the modules, they could make money on those) in each monitor and let AMD run it too, giving themselves an even bigger market for those monitors.

But no, no, no, they won't.


Making cool shit and giving the code for that cool shit to developers for easy integration costs money. Much more money than AMD has ever put into their game dev programs. It's a circle, dude. Gameworks is not just a demo, it's a library built for integration; it takes time to make those things so they're easy to integrate. Anyone can make a demo. AMD made a demo of TressFX, nV made a demo of Hairworks, but TressFX wasn't as easy to integrate, so devs started using Hairworks. TressFX was used in and shown off in Tomb Raider, right? If I remember correctly that was 2013; the first game to use Hairworks was Witcher 3, in 2015. TressFX 1.0, which was used in TR, didn't have all the functionality Hairworks had, nor did AMD support it much, and integration wasn't great either - the process was laborious. This is something nV has always been keen on when "helping" devs: making sure the integration is done easily. Granted, there have been bad examples, most likely rushed-to-release issues like Batman AK, but mostly they go off without a hitch.


They aren't in the monitor market; they only want to sell their cards and stop their competitors ;)
 
There is a bit of that, but the theme is that Witcher 3 enabled 64x tessellation when no visible change occurred over 16x tessellation, which AMD handled just fine. So 64x penalized AMD on benchmarks (and older Nvidia hardware too) with no appreciable increase in quality. It was a false, no-benefit setting designed to put the newest Nvidia chipsets in the strongest light for marketing purposes.

I'll agree both parties have twisted the truth, but I'll contend Nvidia definitely does so more regularly and more heavy-handedly pushes the proprietary nonsense.

(Remember Nvidia disabling PhysX when an AMD card was present in the system.)

x32 tessellation in TW3 looked somewhat acceptable while running around in the wild, but it was still very noticeable in dialogues, and in a game based so heavily on dialogues that was game-breaking. Anything lower than that looked horrible. All the screenshot comparisons made online initially were purposely set up by AMD guys to show "no difference" so they wouldn't feel so bad about the huge performance penalty. However, that performance penalty was more related to the Hairworks AA than to Hairworks itself; it was fixed later by CD Projekt Red with a slider that adjusted the Hairworks AA.
 
x32 tessellation in TW3 looked somewhat acceptable while running around in the wild, but it was still very noticeable in dialogues, and in a game based so heavily on dialogues that was game-breaking. Anything lower than that looked horrible. All the screenshot comparisons made online initially were purposely set up by AMD guys to show "no difference" so they wouldn't feel so bad about the huge performance penalty. However, that performance penalty was more related to the Hairworks AA than to Hairworks itself; it was fixed later by CD Projekt Red with a slider that adjusted the Hairworks AA.


With Hairworks AA, AMD's chips use more bandwidth - AA on more triangles means more bandwidth needed. Another weakness that was exploited.

Anything and everything that could hurt AMD with triangles was done lol, and the only way around it was to have separate settings.

Why was it done? That is business. Just like AMD did with async compute.

Is it malicious code? No, it's just the way it is.

Now, AMD called foul on nV for not having async compute, but it doesn't even matter anymore, even in their poster-child game AOTS: nV's Maxwell architecture is still comparable to GCN equivalents even when they are running async compute and Maxwell is not. My understanding is nV did quite a bit of driver work to get that performance out of that game. For a game with so few users, nV took it upon themselves to do the work when they had the time. Has AMD done the same with DX11 driver overhead? It would help much more than just one title, and it was noticed back in, oh, 2015. Still no changes.

nV is proactive because they can be and have the money to do it (even when they were bleeding money they pushed to do it, a risk very few companies would take because it's way outside the actual business plan). AMD can't do it because of the lack of resources. To get those resources they need to make better products, but when they had good products they weren't able to capitalize on them, even when ATi was around. In other words, this is a management problem. That management is gone now, but they're still doing the same things? We see it with HSA, ROCm, all the same tactics: it's a blame game, then "oh, it's not working, let's go to something else." They need to invest money and sustain that investment without changing course midway just because nothing has happened in the short term.
 
The Crysis thing has been debated ad nauseam, but if you go digging on the AMD subreddit, you can find a curious case that hints at tessellation being applied even when it should be culled.

The user is playing Crysis 2 looking at a door. All you can see is the door and some of the frame around it.

With different levels of tessellation, he had *massively* different performance. There was no visual difference in detail at any setting, because he was just looking at a door.

I think that pretty definitively proves that tessellation is, to some extent, abused in that game in a way that benefits Nvidia. I'm not saying it's intentional or that they were paid for it, but it definitely happened: unnecessary tessellation, or at the very least a terrible culling implementation.

In Witcher 3 there is genuinely no visual difference between 32x and 64x, in motion or otherwise.

Fallout 4 is similar: unnecessary levels of tessellation for its godrays.
 
The Crysis thing has been debated ad nauseam, but if you go digging on the AMD subreddit, you can find a curious case that hints at tessellation being applied even when it should be culled.

The user is playing Crysis 2 looking at a door. All you can see is the door and some of the frame around it.

With different levels of tessellation, he had *massively* different performance. There was no visual difference in detail at any setting, because he was just looking at a door.

I think that pretty definitively proves that tessellation is, to some extent, abused in that game in a way that benefits Nvidia. I'm not saying it's intentional or that they were paid for it, but it definitely happened: unnecessary tessellation, or at the very least a terrible culling implementation.

In Witcher 3 there is genuinely no visual difference between 32x and 64x, in motion or otherwise.

Fallout 4 is similar: unnecessary levels of tessellation for its godrays.


Depends on how the portal was set up and how close he was to the door. I know it wasn't abused, because as I said I developed on that engine and know full well how it functions. If he was right there with his nose on that door, it will get over-tessellated - that is how adaptive tessellation works. Objects closer to the camera get more tessellation across the entire object, and since it's a flat object without curves, adaptive tessellation is likely failing because it takes the angle of the camera into consideration for the adaptive part, not just what is in the FOV. There is no control over that once it happens; those are outliers that can't really be fixed.
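For anyone wondering what "adaptive" actually means here, it boils down to something like this (a simplified sketch with made-up ranges, not Cry Engine's code): the factor is driven by distance to the camera, so a flat door with your nose against it gets the maximum factor even though the extra triangles buy you nothing visually.

```cpp
#include <algorithm>
#include <cstdio>

// Simplified sketch of distance-adaptive tessellation (made-up ranges, not
// Cry Engine code). The factor is highest right at the camera and falls off
// with distance, so a flat surface filling the view at near-zero distance
// still gets the maximum factor.
float AdaptiveTessFactor(float distanceToCamera,
                         float maxFactor = 64.0f,
                         float minFactor = 1.0f,
                         float falloffRange = 50.0f) {
    float t = std::clamp(distanceToCamera / falloffRange, 0.0f, 1.0f);
    return maxFactor + t * (minFactor - maxFactor); // lerp: near = max, far = min
}

int main() {
    std::printf("nose against the door: ~%.0fx\n", AdaptiveTessFactor(0.5f));
    std::printf("across the room:       ~%.0fx\n", AdaptiveTessFactor(10.0f));
    std::printf("far down the street:   ~%.0fx\n", AdaptiveTessFactor(100.0f));
}
```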
 
AMD didn't donate Mantle, they were forced to after its failure. It had loads of bugs, didn't work well on new GCN hardware, needed constant driver involvement, and always came in late, months after a game's launch. The original plan was to have 15 Mantle games; it only got 7. The rest of the games didn't even bother and got released with DX11 (not even Vulkan or DX12).

DX12 is on a similar trajectory: 2 years now and only 13 games, with nothing in sight other than Forza 7 for a long time. Developers actually hate it; they still see DX11 as the more reliable option. The only games that have DX12 are those AMD pays for, or those Microsoft forces DX12 upon. The rest of the industry just ignores DX12. And Vulkan, same thing: very few games get released on PC with Vulkan (3?), and AMD pays for those too.

While I have declared Vega a giant letdown, you sound like the opposite of an AMD fanboy. Mantle was not a fuckin failure. By that logic Vulkan is a fuckin failure? Seriously, love the fanboy wars!
 
While I have declared Vega a giant letdown, you sound like the opposite of an AMD fanboy. Mantle was not a fuckin failure. By that logic Vulkan is a fuckin failure? Seriously, love the fanboy wars!


Vulkan still hasn't taken off, only one marquee game ;) and id Tech is not what most devs use, unlike in the past.
 
Watch this about hairworks

https://m.twitch.tv/amd/v/5335751?desktop-redirect=true#

As to sharing tech: your statement is rubbish, and if you don't know better you are either new here or have fanboy blinders on.

PhysX
Hairworks
HBAO
CUDA

Arguably G-Sync

You got me with CUDA lol. I was thinking Mantle was far worse than the other examples, but CUDA is a good one... gimmicks you can shut off don't count to me. CUDA is more of a dick move than Mantle, I suppose.
 
Vulkan still hasn't taken off, only one marquee game ;) and id Tech is not what most devs use, unlike in the past.
That's because ZeniMax refuses to let them license out the engine, unfortunately. id Tech 6 is incredible; the visual quality and relatively low system requirements are amazing.
 
Making cool shit and giving the code for that cool shit to developers for easy integration costs money. Much more money than AMD has ever put into their game dev programs. It's a circle, dude. Gameworks is not just a demo, it's a library built for integration; it takes time to make those things so they're easy to integrate. Anyone can make a demo. AMD made a demo of TressFX, nV made a demo of Hairworks, but TressFX wasn't as easy to integrate, so devs started using Hairworks. TressFX was used in and shown off in Tomb Raider, right? If I remember correctly that was 2013; the first game to use Hairworks was Witcher 3, in 2015. TressFX 1.0, which was used in TR, didn't have all the functionality Hairworks had, nor did AMD support it much, and integration wasn't great either - the process was laborious. This is something nV has always been keen on when "helping" devs: making sure the integration is done easily. Granted, there have been bad examples, most likely rushed-to-release issues like Batman AK, but mostly they go off without a hitch.


They aren't in the monitor market; they only want to sell their cards and stop their competitors ;)

They stepped into the monitor market with a proprietary hardware integration piece, so don't play coy.

NV holds the hands of developers with money - gotta spend money to make money, right? But the black-box approach is anti-competitive behavior when it excludes your competitor from optimizing for games where these features are showcased in the press and in benchmarks.

Backhanded business practices, like Intel bending Dell's arm when it came to selling AMD chips back in the day, or putting exclusions in compilers that wouldn't let AMD run the same code paths despite having the features on their CPUs.

It's all to worship the mighty dollar, whoever wins, no matter how, it doesn't matter?

What a piece of work.
 
They stepped into the monitor market with a proprietary hardware integration piece, so don't play coy.

NV holds the hands of developers with money - gotta spend money to make money, right? But the black-box approach is anti-competitive behavior when it excludes your competitor from optimizing for games where these features are showcased in the press and in benchmarks.

Backhanded business practices, like Intel bending Dell's arm when it came to selling AMD chips back in the day, or putting exclusions in compilers that wouldn't let AMD run the same code paths despite having the features on their CPUs.

It's all to worship the mighty dollar, whoever wins, no matter how, it doesn't matter?

What a piece of work.


Yeah, and it does more than FreeSync and there is validation. I'm not playing coy: they don't give a shit about AMD and will do anything and everything to shut them out of a market. Simple as that, that is business. If AMD can't compete because they currently have weaker products, that is their own fault, not nV's. Consumer choice only matters when you are locked in; it doesn't matter if it's FreeSync or G-Sync, you are locked into either AMD cards or nV cards unless nV wants to support open sync standards, but they don't need to - they are the market leaders and that is that.

There is nothing backhanded as long as they don't force a consumer to choose only their products - pretty much what Intel did; nV has not done anything like that, ever! If a consumer wants an AMD card, they can still use it with a G-Sync monitor; it just won't have the sync tech, but it's still usable. Same goes for AMD's FreeSync monitors. They don't lock anyone out, it's just that nV doesn't support FreeSync, and it's their choice whether to do it or not.

Exactly - both companies worship the almighty dollar and will do whatever it takes to get that dollar out of our pockets. Don't look at AMD's open-standards BS as better; they are only doing that because they have no DAMN choice. They don't have the money or the market presence to pull their business relationships or developers to their side, and when they can't do that, they wrap it up as "open source". It's because they don't have the resources to do things on their own. They did it with Mantle, they did it with FreeSync, they did it with TressFX, they did it with HSA, they did it with ROCm, and many more programs. The only one that was successful to some degree was FreeSync, and that is because the monitor manufacturers have a vested interest - they too have money invested in that tech (not directly, but in the monitor itself). All the others have failed, never taken off, or been replaced. That is the downfall of open-source or open projects: if the company that is the main proponent of the initiative doesn't spend more money and resources on it than the partnering companies, it will fail. They need to be the driving force, and if they are not, it fails.

Put yourself in a dev's shoes working with HSA and ROCm. Those APIs are good, but not as good as CUDA, since some of CUDA's features really help speed. To get that kind of performance out of HSA and ROCm, devs have to spend more time with those products and get those APIs to better support certain libraries. Would you, as a dev or company owner, want to allocate time, money, and resources to an open-source project when you could spend the money on your own project and get CUDA and nV hardware? And we know man-hours are the killer in software projects, not the initial hardware costs. This is exactly where AMD's open-source initiatives fail: they don't innovate fast enough. To do it faster they need to do more of the work themselves, which costs money, or guide the work and have partners sign off on a timeline for feature implementation - but that means AMD needs to strong-arm their partners into a cohesive agreement on who is doing what and when it will be done. They don't have the muscle to do that. This is why every single one of their open-source game dev programs has failed.

Boo hoo, AMD can't get anywhere with these types of things; too bad it's their own fault they are in the position they're in.
 
Yeah, and it does more than FreeSync and there is validation. I'm not playing coy: they don't give a shit about AMD and will do anything and everything to shut them out of a market. Simple as that, that is business. If AMD can't compete because they currently have weaker products, that is their own fault, not nV's. Consumer choice only matters when you are locked in; it doesn't matter if it's FreeSync or G-Sync, you are locked into either AMD cards or nV cards unless nV wants to support open sync standards, but they don't need to - they are the market leaders and that is that.

There is nothing backhanded as long as they don't force a consumer to choose only their products - pretty much what Intel did; nV has not done anything like that, ever! If a consumer wants an AMD card, they can still use it with a G-Sync monitor; it just won't have the sync tech, but it's still usable. Same goes for AMD's FreeSync monitors. They don't lock anyone out, it's just that nV doesn't support FreeSync, and it's their choice whether to do it or not.

Exactly - both companies worship the almighty dollar and will do whatever it takes to get that dollar out of our pockets. Don't look at AMD's open-standards BS as better; they are only doing that because they have no DAMN choice. They don't have the money or the market presence to pull their business relationships or developers to their side, and when they can't do that, they wrap it up as "open source". It's because they don't have the resources to do things on their own. They did it with Mantle, they did it with FreeSync, they did it with TressFX, they did it with HSA, they did it with ROCm, and many more programs. The only one that was successful to some degree was FreeSync, and that is because the monitor manufacturers have a vested interest - they too have money invested in that tech (not directly, but in the monitor itself). All the others have failed, never taken off, or been replaced. That is the downfall of open-source or open projects: if the company that is the main proponent of the initiative doesn't spend more money and resources on it than the partnering companies, it will fail. They need to be the driving force, and if they are not, it fails.

Put yourself in a dev's shoes working with HSA and ROCm. Those APIs are good, but not as good as CUDA, since some of CUDA's features really help speed. To get that kind of performance out of HSA and ROCm, devs have to spend more time with those products and get those APIs to better support certain libraries. Would you, as a dev or company owner, want to allocate time, money, and resources to an open-source project when you could spend the money on your own project and get CUDA and nV hardware? And we know man-hours are the killer in software projects, not the initial hardware costs. This is exactly where AMD's open-source initiatives fail: they don't innovate fast enough. To do it faster they need to do more of the work themselves, which costs money, or guide the work and have partners sign off on a timeline for feature implementation - but that means AMD needs to strong-arm their partners into a cohesive agreement on who is doing what and when it will be done. They don't have the muscle to do that. This is why every single one of their open-source game dev programs has failed.

Boo hoo, AMD can't get anywhere with these types of things; too bad it's their own fault they are in the position they're in.

And we will suffer for it.

If Intel hadn't fucked AMD all those years ago, things would be a lot different I think.

AMD is at least putting forth more effort now to get tools out there, it'll be a game of catch up though.

And depending on the needs, the AMD compute cards can still make sense to certain markets. Especially since they're thirstier for contracts, so I wouldn't write them off entirely.
 
And we will suffer for it.

If Intel hadn't fucked AMD all those years ago, things would be a lot different I think.

AMD is at least putting forth more effort now to get tools out there, it'll be a game of catch up though.

And depending on the needs, the AMD compute cards can still make sense to certain markets. Especially since they're thirstier for contracts, so I wouldn't write them off entirely.
The Intel and AMD thing is completely different. AMD is behind NVIDIA not because of underhanded tactics by NVIDIA, but because AMD just has not had any competitive products in a long time. There is nothing underhanded about Gameworks; it is a proprietary software suite designed to leverage NVIDIA hardware. We used to call that a competitive advantage, not these bogus accusations of anti-competitiveness.
 
And we will suffer for it.

If Intel hadn't fucked AMD all those years ago, things would be a lot different I think.

AMD is at least putting forth more effort now to get tools out there, it'll be a game of catch up though.

And depending on the needs, the AMD compute cards can still make sense to certain markets. Especially since they're thirstier for contracts, so I wouldn't write them off entirely.


We don't have a choice, man, and Intel got punished for it, although it was a slap on the wrist IMO. What are we going to do, sue Intel? We could, but it won't get far because they already got punished.

AMD cards right now, with the current iterations of ROCm and HSA, are not worth developing on at all in the DL or HPC markets, and that is why we don't see many of them being used; their market share is probably less than 5% in those markets. They aren't written off, but much more work needs to be done on the software side of things; once that work is done they will be able to push faster. The problem is nV is creating new features for those libraries and iterating CUDA faster than AMD is with HSA or ROCm. Every single generation of cards, nV adds new extensions and features that accelerate time to market and enhance developer software features with CUDA. It's tough for open-source initiatives to compete with a closed-source project where timelines are concerned. Closed source has a clear directive and leadership that can push; with open source that is much harder because the people doing the work aren't being paid to do it.

We can take this all the way back to the origins of OGL: nV did the same thing with OGL as they are doing with CUDA now. They spent the time and resources to keep OGL going; they took the leadership position. They are doing it with Vulkan now too. Why was there a role reversal, from DX vs OGL then to DX vs Vulkan now, where AMD and nV are concerned? The API doesn't matter, right? And it doesn't - it's the underlying hardware and software that matter, so in that case nV shouldn't be at a disadvantage. Initially their Vulkan drivers were lacking many features, but they have added features and closed the gap. They will do more, and slowly they will take the lead again. There is a time and place to do things, and when we see nV do things the way they always have - create a strong base and move forward - they succeed. They were poor in DX9 with the FX; one gen later they almost caught up with AMD in DX9 with the 6800, then ATi brought out a much better product with the X1x00 series, while nV had their 7x00 series which fared well too. Then with DX10 it was pretty much all nV until they slowed down. In DX11 they took the lead again. DX12 looked to be AMD's, but that didn't last long. All the while, AMD with OGL rewrote their drivers twice and didn't make any headway. Vulkan is a boon for them because now they don't need to do the work anymore lol. No pressure - but if they don't push for things their way in an open committee, and have the leverage to push for it, it's going to go nV's way.

Yes, if Intel hadn't done what they did, or had been fined more than they were, things would be much different. The actual damage from what Intel did was not 2 billion bucks; it's more like 10 or 15 billion bucks. AMD agreed to 2 billion because they were desperate for money at the time. They could not pursue it in court for long, so they settled.
 
HD 7970 slower than GTX 680
HD 290X slower than GTX 780Ti/Titan Black
FuryX slower than 980Ti
Vega 64 massively slower than 1080Ti/TitanXp

See the pattern? With Vega, AMD has now come full circle to being massively behind. And no amount of driver enhancements will do anything to dissolve this NVIDIA lead. They are just behind like always; it's just worse this time.

Clearly you have never owned some of those cards and are pretty biased in your evaluation.
The 7970 was released against the 580, not the 680, which came a few months later. Even so, it was a little slower at launch but picked up and overtook the 680, and now beats it by a bigger margin than the 680 beat it by at launch. So it ended up faster than a newer card. Same happened with the old X800XT.
Same goes for the 290X vs 780Ti - compare AIB to AIB and the 780Ti is noticeably slower, let alone in VRAM-limited scenarios.

Lovely to see Intelsharesjuan bringing the "muh Intel stronk / AMD is slower than a Cyrix 333MHz" BS into a video card thread. Piss off and make an i9 prediction blog, will you :/
 
Why is AMD's die getting relatively larger than nV's while nV is able to keep higher throughput and increase FLOPS figures and ALU counts? Weird, right? And AMD is supposed to be on a smaller node, 14nm...
Because AMD do not have the budget to make specialised dies for consumer grade products like Nvidia sometimes does. So, they make a few dies and cut them as needed for the entire range, from consumer to professional.
E.g. Polaris has SSG capabilities for a niche market baked into every chip. Those transceivers will take up some die space; Nvidia does not offer this functionality anywhere to my knowledge.
Same with Vega - it has 500 GB/s IF for SSG capabilities; even more room is needed for that sort of speed, among other applications and circuitry.

Bit like the wide voltage bins AMD uses. I don't remember seeing an AMD card in recent years that wouldn't OC at least a little bit, full stop. Meanwhile, I did experience that in the past when they were binning the cherry-picked chips, e.g. the X800XT PE; I had an X800XT that wasn't stable even +20MHz at stock volts, because the good ones had been binned off. These days many people can undervolt with decent margins, but yes, of course that's basically reverse OC.
I'd be curious if users here have a Polaris/Vega that doesn't OC at all, anyone?

Another similar example is the Xeons - they are released to us plebs with decent chunks of die disabled, as they make proprietary circuits available for large customers on an NDA basis e.g. Amazon, Faceberg etc. We don't know what they do, other than they are there and disabled for us. Wonder if it's some NSA perve passthrough lol.

Gameworks is basically plug and play for AMD. AMD's tech is almost never plug and play for nVidia.
But it should be trivial for them to throw $ at it; the lazy-driver-team argument etc. goes both ways here, I guess.
 
Because AMD do not have the budget to make specialised dies for consumer grade products like Nvidia sometimes does. So, they make a few dies and cut them as needed for the entire range, from consumer to professional.
E.g. Polaris has SSG capabilities for a niche market baked into every chip. Those transceivers will take up some die space; Nvidia does not offer this functionality anywhere to my knowledge.
Same with Vega - it has 500 GB/s IF for SSG capabilities; even more room is needed for that sort of speed, among other applications and circuitry.

Bit like the wide voltage bins AMD uses. I don't remember seeing an AMD card in recent years that wouldn't OC at least a little bit, full stop. Meanwhile, I did experience that in the past when they were binning the cherry-picked chips, e.g. the X800XT PE; I had an X800XT that wasn't stable even +20MHz at stock volts, because the good ones had been binned off. These days many people can undervolt with decent margins, but yes, of course that's basically reverse OC.
I'd be curious if users here have a Polaris/Vega that doesn't OC at all, anyone?

Another similar example is the Xeons - they are released to us plebs with decent chunks of die disabled, as they make proprietary circuits available for large customers on an NDA basis e.g. Amazon, Faceberg etc. We don't know what they do, other than they are there and disabled for us. Wonder if it's some NSA perve passthrough lol.


But it should be trivial for them to throw $ at it; the lazy-driver-team argument etc. goes both ways here, I guess.

nV uses their desktop chips as Quadros and Teslas too, so don't think that's it. I think GCN just needs more transistors for what it does, and it's because of the way the shader array is set up. Even with HBM, which reduces the bus size, they still have a larger chip. Something is eating up all that space.

SSG shouldn't need anything extra in the silicon to make it work; it's just a caching system that needs to be reworked, and most of that is driver work. It's just a matter of hiding the latency when accessing the data from the integrated SSD.
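The latency hiding is just the usual double-buffering trick; conceptually something like this (a generic sketch with stand-in ReadChunkFromSSD/ProcessOnGPU functions, not how the SSG driver actually works): read the next chunk off the SSD while the GPU is still busy with the current one, so the slow read overlaps useful work instead of stalling it.

```cpp
#include <cstdio>
#include <future>
#include <vector>

// Generic double-buffering sketch for hiding storage latency (illustrative
// only, not the actual SSG driver).
using Chunk = std::vector<char>;

Chunk ReadChunkFromSSD(int index) {
    // Stand-in for a real SSD read of one tile of a huge dataset.
    return Chunk(4 * 1024 * 1024, static_cast<char>(index));
}

void ProcessOnGPU(const Chunk& c, int index) {
    // Stand-in for uploading/consuming the chunk on the GPU.
    std::printf("processed chunk %d (%zu bytes)\n", index, c.size());
}

int main() {
    const int numChunks = 8;
    // Kick off the first read, then always prefetch chunk i+1 while chunk i
    // is being processed.
    std::future<Chunk> next = std::async(std::launch::async, ReadChunkFromSSD, 0);
    for (int i = 0; i < numChunks; ++i) {
        Chunk current = next.get();
        if (i + 1 < numChunks)
            next = std::async(std::launch::async, ReadChunkFromSSD, i + 1);
        ProcessOnGPU(current, i);
    }
}
```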

nV just didn't go that route because companies that need that type of hardware will be buying more than just one card with onboard storage. Why spend $10k on one card that will really only limit the work when you can use a rack that will give you much better results and faster render times? Yeah, the cost will be higher, but a project that requires that speed will be quite an expensive project anyway. It's not like it's going to be for YouTubers lol; 10k is quite a bit for a YouTuber to spend unless they are getting that many views.

Undervolting is hit or miss; across all my mining rig cards, they really fluctuate in how much I can undervolt. In my RX 580 rigs I have some I can only undervolt by -12mV, which is nothing lol. Some go 60mV lower. But it's all over the place.
 
nV uses their desktop chips as Quadros and Teslas too, so don't think that's it. I think GCN just needs more transistors for what it does, and it's because of the way the shader array is set up. Even with HBM, which reduces the bus size, they still have a larger chip. Something is eating up all that space.

SSG shouldn't need anything extra in the silicon to make it work; it's just a caching system that needs to be reworked, and most of that is driver work. It's just a matter of hiding the latency when accessing the data from the integrated SSD.

nV just didn't go that route because companies that need that type of hardware will be buying more than just one card with onboard storage. Why spend $10k on one card that will really only limit the work when you can use a rack that will give you much better results and faster render times? Yeah, the cost will be higher, but a project that requires that speed will be quite an expensive project anyway. It's not like it's going to be for YouTubers lol; 10k is quite a bit for a YouTuber to spend unless they are getting that many views.

500 GB/s interlinks would require some serious transceivers, much akin to the memory interface - the same speed, or double the number of them really! I don't see that being nothing extra silicon-wise. If it were just caching, the GPU would thrash memory bandwidth if both were contending while processing data, and it would be a waste of time in any workload requiring processing, would it not?

IIRC the main customers for SSG cards are oil/gas and people with huge datasets. I still can't wait to see if it ends up in Navi and what the implications will be for game development; sudden texture changes and teleporting around massive persistent worlds would be possible... e.g. an open-world Portal 3 or something.

Undervolting is hit or miss; across all my mining rig cards, they really fluctuate in how much I can undervolt. In my RX 580 rigs I have some I can only undervolt by -12mV, which is nothing lol. Some go 60mV lower. But it's all over the place.

Hah, 12mV is pretty crappy. What would you say the average is, and what's the sample size, roughly (if you care to say lol)?
 