Unreal Engine and Vulkan Better Suited for AMD

FrgMstr
Staff member
While hardly a deep dive into exactly why, it is interesting to see an Unreal Engine developer state on their forums that AMD is better suited to run Vulkan rendering commands than NVIDIA.


On AMD we are faster than D3D11, once the slides from my GDC talk are out you can see the numbers.

Means the way we are generating Vulkan commands seems to be better suited for AMD for some reason. We need to do more profiling to find why it's not as great on NV.
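
For context, "generating Vulkan commands" means recording each frame's work into command buffers. A minimal sketch of what that looks like at the API level (illustrative only, not Epic's code; object creation, synchronization, and error handling omitted):

#include <vulkan/vulkan.h>

// Record one frame's draw commands into a pre-allocated command buffer.
void RecordFrame(VkCommandBuffer cmd, VkRenderPass renderPass,
                 VkFramebuffer framebuffer, VkExtent2D extent,
                 VkPipeline pipeline)
{
    VkCommandBufferBeginInfo beginInfo{};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
    vkBeginCommandBuffer(cmd, &beginInfo);

    VkClearValue clear{};                                 // clear to black
    VkRenderPassBeginInfo rpInfo{};
    rpInfo.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    rpInfo.renderPass      = renderPass;
    rpInfo.framebuffer     = framebuffer;
    rpInfo.renderArea      = {{0, 0}, extent};
    rpInfo.clearValueCount = 1;
    rpInfo.pClearValues    = &clear;

    vkCmdBeginRenderPass(cmd, &rpInfo, VK_SUBPASS_CONTENTS_INLINE);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, 3, 1, 0, 0);                           // one triangle, one instance
    vkCmdEndRenderPass(cmd);

    vkEndCommandBuffer(cmd);
}

How an engine batches, threads, and submits buffers like this each frame is where implementations differ, and presumably where the NVIDIA-versus-AMD gap the developer mentions comes from.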
 
Is he talking performance delta compared to NVIDIA? Because that is not really surprising when NVIDIA hardware is already as efficient as we've seen in previous performance reviews. I guess we need the slides. Either way, I would be ecstatic for Vulkan to gain more traction in the AAA space. The dream, of course, is to no longer be chained to Windows with DirectX.
 
Well AMD would need every advantage it can get just to barely remain above water, too bad they're still absolutely nowhere near anything like competitive at all, Vega is an absolute flop of a failure and the clearly terrible FC5 results show their fastest offering getting thrashed even by nvidia's ancient #2 card even though everybody knows FC5 is an AMD game. Goes to show that AMD is just a foolish purchase. Their stock should be worth $0.00.

(wow, I feel REALLY dirty after that. How do some of you do that every day?)
 
And again, the difference is negligible and comes down to a case-by-case basis. Opinions are fun!

I could see a gap opening up down the line from Nvidia's ability to produce multiple GPU variants, which, cost permitting, could put Tesla-type cards in developers' boxes, speeding up production and giving trickle-down benefits to the GTX lineup wearing the same clothes. As long as Nvidia is able to maintain advantages in raw power, power consumption, performance/watt, and heat, consumers won't be moving the needle much.

Regardless, it tends to get lost in the noise that AMD really does have extremely capable products that they tend to stretch a little too thin. Some of the efficiency numbers for Vega with the voltage turned down can be eye-opening. They just didn't have the top end to compete in efficiency. If we all wanted to be honest with ourselves, if you take away the 1080 Ti's lead in 4K gaming, anything under that resolution makes AMD a fairly even choice across all price points (in a normal MSRP world).

While AMD could put together something special in the next few years to reclaim the performance crown, I'm really hesitant to accept that will be their direction. The partnership with Intel will bridge the gap between integrated GPUs and midrange discrete cards, but that could be all. AMD needs a player in the AI/learning/AV fields, and with the inability to develop several GPU variants in tandem with the specificity of Nvidia, their gaming offerings may be relegated to high-midrange in the future as well.
 
Because they didn't use that performance-enhancing trick we call Spectre? ;)
 
Anyone that thinks Vega can't compete (esp under Vulkan) is ignoring Wolfenstein II benchmarks.
 
I have a theory that ATI/AMD has for a long time had technology that would allow it to be more competitive with NVIDIA, but that games were not programmed to take advantage of that technology for whatever reason. NVIDIA has developed their GPUs to be the best using current game technology, with games developed primarily to take advantage of the areas where NVIDIA was superior. With Vulkan, and to a lesser extent DX12, we are seeing games take advantage of technology that AMD/ATI has been promoting and NVIDIA has been ignoring.
 
Well AMD would need every advantage it can get just to barely remain above water, too bad they're still absolutely nowhere near anything like competitive at all, Vega is an absolute flop of a failure and the clearly terrible FC5 results show their fastest offering getting thrashed even by nvidia's ancient #2 card even though everybody knows FC5 is an AMD game. Goes to show that AMD is just a foolish purchase. Their stock should be worth $0.00.

(wow, I feel REALLY dirty after that. How do some of you do that every day?)

the noobie tag for your name (via the number of posts you have made) suits you very well indeed.

competitive is more than just absolute performance IMO: quality of drivers, quality of the finished product, quality of the design, quality of the various parts used on the product in question (as well as what has been done to make the technology possible in the first place). for my money, ATI and AMD have done far more "good" for the graphics card landscape than Nv has (who act like a trashy bully with a leather coat and a hair pick)

Vulkan is AMD's "baby" in a sense, derived from and built upon components of AMD's Mantle API, which was donated by AMD to Khronos with the intent of giving Khronos a foundation on which to begin developing a low-level API that they could standardize across the industry, much like OpenGL (not for profit, and AMD gave Nv as well as Intel and MSFT the chance right off the bat to be a part of it, all of whom "thumbed their nose" at AMD, no surprise there)

the only reason a chunk of the advanced features are in DX12 (some, not all; same with DX11, same with DX10) is because of what AMD wanted to put, and was putting, into Mantle (and prior to that, let alone 64-bit computing as we know it). and why Nv does not do "as well" is, IMO, because they do not or did not see a way to tilt the tables in their favor and make massive profits making others look "slow" as they can with DX; it becomes more of a "level playing field"

if you want the same raw performance (in the same fashion, without screwing around with the code) you need the same raw parts, which means you chew up the same raw power and dish out the same raw heat, which also means it costs the same raw $$$ to make it happen. oh, and it would also mean ceding control to an AMD-derived API... Nv does not want any of that, so likely they will only ever support "some of it" at most.

anyways, FC5 as you "point out" is but 1 game, hardly substantial proof that a card that is more of a "compute" card is a "flop". you are comparing, in your example, a pure gaming-focused card (Nv) to a compute (plus gaming) card (AMD), not exactly a proper comparison now is it... just as you feel their valuation should be $0.00 (which is asinine at best), my valuation of your "opinion" of AMD in this very limited example holds as much weight as a bag of dust ^.^
 
People forget Blender is as good a game as any triple A title coming out this year or in the next ten years and already features ray tracing.
 
I like the Vega 64. It was positioned against the 1080 and it did well.

The Ti cards are meant to crush, and they do.

I was going to buy a V64 originally but they were back-ordered.
I was like ok, bought a 1080 instead. Then the week after I bought it I realized I could step up to the Ti and I traded up (EVGA) for a $100 difference.

My brother has a V64 and we both like how our rigs perform.

I really don't think on the daily I can NOTICE a difference side by side.

*Edited so people won't die of judgement because some of us exaggerate. They were mad about my recollection of events. I don't even remember when I had back surgery but it was before the 1080 came out.

I hope they stay competitive so both keep pushing the boundaries.

Go team games!
 
Even in sports you don't hope a team quits and disbands, because the games would suck with only 2 teams.

I really wish 1 or 2 other GPU players could enter the gaming arena.
 
Is he talking performance delta compared to NVIDIA? Because that is not really surprising when NVIDIA hardware is already as efficient as we've seen in previous performance reviews. I guess we need the slides. Either way, I would be ecstatic for Vulkan to gain more traction in the AAA space. The dream, of course, is to no longer be chained to Windows with DirectX.

but is Nv really that "efficient"? IMO sure they are quite awesome in their own way, but they also are not as "fleshed out" as they once were. what I mean is the older Nv GeForce cards were more of a sledgehammer-type design, whereas their current-generation ones (it seems to me) are more of a stripped-down race car approach (more and more as the years went by). AMD chose to keep theirs more of a "sledgehammer" for many generations.

with Nv cards, now more than ever, they seem to "need" the fancy tricks they can do within DX or within their cards' circuitry to emulate/trick certain features out of the hardware/software vs just having it on the card outright, whereas AMD chose to go the other route and maybe be a bit less efficient (in absolute terms) by cramming in everything they could as best they can. while not always beneficial, it truly shows how much oomph they have available (VLIW5, VLIW4, GCN) in that even at a "lower clock speed" they still manage to "keep up" subjectively quite well, even with the clock speed/power consumption deficit.

more of everything to do everything takes power after all, and it also takes the "code" to leverage this benefit. I think Mantle was AMD's way of allowing devs and such to more easily take advantage of what is "in the hardware" than they once were able to... another analogy I suppose: you can have a fast 1/4-mile car (Nv) or a full Le Mans car (AMD), but it is next to impossible to have a car tuned to run both "styles" just as well. it takes that many more tuning profiles to take advantage of all that a Le Mans track has to offer than the 1/4-mile racer ^.^
 
Even in sports you don't hope a team quits and disbands, because the games would suck with only 2 teams.

I really wish 1 or 2 other GPU players could enter the gaming arena.

there used to be many, many companies that made GPUs for desktop (Windows), but the cost to do so, the man hours required, and the revenue made from and required to compete make it likely not possible... let alone all the patents and such held by the few behemoths in the industry, of which there used to be thousands and now there are likely at best only a hundred or so (being bankrupted, bought out, etc.)

seems likely we will only have 3 "players" in the race (for performance desktop graphics cards and CPUs):
Intel (who now have a viable "maybe" graphics card arm)
AMD (who of course does CPUs as well as GPUs)
Nvidia (who basically only does GPUs)
(when saying Windows specifically: it essentially requires x86, which is Intel property; x64 is not required but foolish to not have, and that is an AMD license. no x86 licence, no Windows. the recent exception has been Arm on Windows, but that is completely a different ball game)

VIA holds a licence but has not made a competitive CPU for Windows in a very long time

for Linux there are other players such as Matrox and such, but they (to my understanding) simply cannot compete directly with Intel or AMD or Nvidia for anything but the "baseline"

smartphone graphics has many more players, but for how fancy they "appear" to be, they are very simplistic designs, not meant for AAA extreme-setting 4K 1000 FPS 100+ watt I AM GOD performance.

it sucks, it is what it is; too many big fish eating all the little fish over the years is basically what took place.
 
but is Nv really that "efficient"? IMO sure they are quite awesome in their own way, but they also are not as "fleshed out" as they once were. what I mean is the older Nv GeForce cards were more of a sledgehammer-type design, whereas their current-generation ones (it seems to me) are more of a stripped-down race car approach (more and more as the years went by). AMD chose to keep theirs more of a "sledgehammer" for many generations.

Nvidia didn't actually change anything until GP102.

What really changed was that instead of AMD's top end competing with the Gx100, which has always been a 'fleshed out' GPU, AMD's performance has dropped so far behind that they can only occasionally keep up with Nvidia's Gx104 GPUs. The Gx104 has always been 'stripped down' in that it lacked the compute muscle of the big chips, muscle that games simply never use, to this day.

Starting with the GTX 680, skipping the GTX 780 (which was the big chip of the same generation), and picking up again with the GTX 980 and GTX 1080, the Gx104 core has been the top-end mainstream SKU. The GP102 marked a change: Nvidia started building a big, 'stripped-down' GPU for games and those emerging GPU applications that do not use the higher-precision capabilities of the Gx100 GPUs, and in its top iteration it's even faster for gaming than said 'compute-focused' GPUs.

If AMD were to build 'slimmer' GPUs where they could get more lower-precision compute units in the same die space, they'd likely be less uncompetitive than they are today.
 
AMD cards are technically more powerful than their Nvidia competitors (eg. a Vega 56 is closer to a 1080Ti than anything else), they just aren't used as efficiently to generate FPS. It's not really surprising to find that AMD cards might be able to punch well above their typical gaming weight class in some workloads, it's mostly just disappointing that they don't do it more often.
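
As a rough sanity check on "technically more powerful", here's a back-of-the-envelope FP32 throughput comparison. This is a sketch using reference boost clocks and the usual 2 FLOPs per shader per clock (FMA); sustained clocks, memory systems, and actual game utilization all differ:

#include <cstdio>

int main() {
    // Peak FP32 = 2 FLOPs (fused multiply-add) x shader count x clock.
    // Reference boost clocks assumed; real sustained clocks vary.
    struct Gpu { const char* name; int shaders; double boost_ghz; };
    const Gpu gpus[] = {
        {"GTX 1080",    2560, 1.733},   // GP104
        {"GTX 1080 Ti", 3584, 1.582},   // GP102
        {"Vega 56",     3584, 1.471},   // Vega 10, 56 CUs x 64 SPs
        {"Vega 64",     4096, 1.546},   // Vega 10, 64 CUs x 64 SPs
    };
    for (const Gpu& g : gpus) {
        double tflops = 2.0 * g.shaders * g.boost_ghz / 1000.0;
        std::printf("%-12s ~%.1f TFLOPS FP32\n", g.name, tflops);
    }
    // Prints roughly 8.9, 11.3, 10.5, and 12.7 TFLOPS: on paper Vega 56
    // sits much closer to the 1080 Ti than to the 1080, which is the point
    // being made; delivered frame rates are another matter.
    return 0;
}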
 
AMD driver programmers are more "by the book" or "do it the right way" than nvidia's "make it work" or "bandaid fixes". This also applies to game developers, where AMD will tell the developer they need to write their game correctly, compared with nvidia, who will just patch a driver to bandaid-fix bad game code. This is why a lot of games run better on nvidia first: they have more programmers to implement bandaids in their drivers until a game dev can optimize correctly. I think PUBG is the perfect example, where it ran better on nvidia for a while until the PUBG devs optimized for AMD.
 
AMD cards are technically more powerful than their Nvidia competitors (eg. a Vega 56 is closer to a 1080Ti than anything else), they just aren't used as efficiently to generate FPS. It's not really surprising to find that AMD cards might be able to punch well above their typical gaming weight class in some workloads, it's mostly just disappointing that they don't do it more often.

AMD cards are more 'technically powerful' because they include a lot of hardware that simply isn't used by games, but may be useful for various compute workloads. They also like to jump the gun on technologies that typically come in more fleshed out versions with subsequent DX releases. Makes for good marketing, doesn't make for efficient gaming cards, and doesn't really help AMD's position in the gaming market.

AMD driver programmers are more "by the book" or "do it the right way" than nvidia's "make it work" or "bandaid fixes". This also applies to game developers, where AMD will tell the developer they need to write their game correctly, compared with nvidia, who will just patch a driver to bandaid-fix bad game code. This is why a lot of games run better on nvidia first: they have more programmers to implement bandaids in their drivers until a game dev can optimize correctly. I think PUBG is the perfect example, where it ran better on nvidia for a while until the PUBG devs optimized for AMD.

Going to need a lot of citations for that one.
 
Well AMD would need every advantage it can get just to barely remain above water, too bad they're still absolutely nowhere near anything like competitive at all, Vega is an absolute flop of a failure and the clearly terrible FC5 results show their fastest offering getting thrashed even by nvidia's ancient #2 card even though everybody knows FC5 is an AMD game. Goes to show that AMD is just a foolish purchase. Their stock should be worth $0.00.

(wow, I feel REALLY dirty after that. How do some of you do that every day?)


I had 425 shares of Commodork, and it did hit zero.

Perhaps it explains why you aren't comfortable with doom and gloom scenarios :ROFLMAO:
 
Anyone know where I can buy bulk popcorn?

I wonder what evolutionary pressure or need created the 'defect' in human psychology that leads to FanBoi-ism.

Is it like Sickle Cell? Does it have a hidden survival advantage for carriers?

Don't NV and AMD have different cryptocurrencies they are better at crunching away at?
 
Dear Epic Games,

It has come to our attention that you have mentioned AMD in one of your forum posts. This violates the Games for Nvidia Gaming Partnership for the Freedom of Games agreement you signed. Please remove all mention of AMD hardware. You can refer to them as "those inferior cards."

Sincerely,
nVidia Partner Outreach



It's satire, but let's be honest, it wouldn't shock us.
 
AMD driver programmers are more "by the book" or "do it the right way" than nvidia's "make it work" or "bandaid fixes". This also applies to game developers, where AMD will tell the developer they need to write their game correctly, compared with nvidia, who will just patch a driver to bandaid-fix bad game code. This is why a lot of games run better on nvidia first: they have more programmers to implement bandaids in their drivers until a game dev can optimize correctly. I think PUBG is the perfect example, where it ran better on nvidia for a while until the PUBG devs optimized for AMD.

It goes deeper than that. NV has taken pages out of both Intel's and MS's playbooks. From MS they have learned that the best way to ensure dominance is to control the format. Hairworks, PhysX and all the other proprietary APIs NV pushes on the software industry are there to ensure NV lock-in... not superior features or performance. Now the big hype is ray tracing and more BS NV and MS lock-ins... never mind that Vulkan and AMD have already been doing real-time ray tracing for over a year.

In the pro world, where AMD supports OpenCL and NV pushes their CUDA junk... NV has been forced to try and implement OpenCL properly, as many pro software companies aren't as willing to play their games. Sadly the gaming industry has mostly gotten roped in... I mean NV offers a ton of support, which is as good as cash when your biggest expense is code monkeys.
 
The problem is this. I need it to work, and I don't need it to crash all the time.

As a customer and a user, I do not buy my products on promises knowing that I am going to consistently have poor performance or stability out of new titles.

The moral high ground doesn't cut it that way, not for me. I have $1500 in monitors alone, I did not spend what I have on my equipment to sign up for troubles I don't need and a few hundred more for a more trustworthy experience is fine by me.

I don't care if the facts point to NVidia taking short-cuts and AMD doing things "the right way". Hell, if they both did it the right way I'd never get past the tutorials.

At the same time, if NVidia ever produces a flat-out fast product that is running on less power, I'm on it. I recognize superior engineering when I see it. It's just that the last time I actually saw that from AMD was about 10 years ago.
 
I was going to buy a V64 originally but they were back-ordered.
I was like ok, bought a 1080 instead. Then the week after, the Ti launched and I traded up (EVGA) for a $100 difference.
Uhmmmm, your timeline? No Vega of any type existed when the 1080 Ti launched. Not even close.
So what are you playing at?
 
It goes deeper than that. NV has taken pages out of both Intel's and MS's playbooks. From MS they have learned that the best way to ensure dominance is to control the format. Hairworks, PhysX and all the other proprietary APIs NV pushes on the software industry are there to ensure NV lock-in... not superior features or performance. Now the big hype is ray tracing and more BS NV and MS lock-ins... never mind that Vulkan and AMD have already been doing real-time ray tracing for over a year.

In the pro world, where AMD supports OpenCL and NV pushes their CUDA junk... NV has been forced to try and implement OpenCL properly, as many pro software companies aren't as willing to play their games. Sadly the gaming industry has mostly gotten roped in... I mean NV offers a ton of support, which is as good as cash when your biggest expense is code monkeys.

First off, NVIDIA (and AMD if they want) are free to develop their own APIs. Take Hairworks; no one's bothered to come up with anything better. Same with PhysX (which I note is an open API; AMD is free to use it if they so desire). Likewise, MSFT added ray tracing extensions to DX12, and NVIDIA is providing an optimized way to utilize it without the developer having to do it themselves.

On the compute front, CUDA is far superior to OpenCL in both structure and performance. The main problem is that open APIs generally lose performance because they have to be abstracted to the point where they can support everyone. Having a solution that works for one hardware design offers superior performance and a more usable API. (Basically, see all the criticisms of OpenGL for reference.)
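
To make the "structure" point concrete, here's roughly the host-side ceremony OpenCL needs just to run one trivial kernel (an illustrative sketch; error checks and the matching clRelease* calls are omitted). The CUDA runtime equivalent is a couple of cudaMemcpy calls and a single kernel launch:

#include <CL/cl.h>

// Kernel source is compiled at runtime by the OpenCL driver.
static const char* kSrc =
    "__kernel void vadd(__global const float* a, __global const float* b,"
    "                   __global float* c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

void run_vadd(const float* a, const float* b, float* c, size_t n) {
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, nullptr);
    cl_device_id   dev;   clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context       ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q   = clCreateCommandQueue(ctx, dev, 0, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);     // runtime compile
    cl_kernel k = clCreateKernel(prog, "vadd", nullptr);

    size_t bytes = n * sizeof(float);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, (void*)a, nullptr);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, (void*)b, nullptr);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, nullptr);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c, 0, nullptr, nullptr);
}

Whether that verbosity justifies a single-vendor API is the debate; it is at least a real cost for developers.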
 
AMD cards are more 'technically powerful' because they include a lot of hardware that simply isn't used by games, but may be useful for various compute workloads. They also like to jump the gun on technologies that typically come in more fleshed out versions with subsequent DX releases. Makes for good marketing, doesn't make for efficient gaming cards, and doesn't really help AMD's position in the gaming market.

Case in point: going back to the ATI days, ATI/AMD cards have generally had superior memory bandwidth. The problem is that ever since PCI-E came around, pure memory bandwidth has been a secondary concern to pure shader performance. There's a handful of game engines that are more sensitive to memory bandwidth (generally anything that uses deferred rendering), but nowadays there's been a move to more shader-intensive engines, which historically has favored NVIDIA's GPU designs over AMD's.
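
For a sense of why deferred renderers lean on memory bandwidth, a quick back-of-the-envelope number (the G-buffer layout here is hypothetical and the read multiplier is a rough guess):

#include <cstdio>

int main() {
    const double w = 2560, h = 1440, fps = 60;
    // Hypothetical G-buffer: four RGBA8 targets (albedo, normals, material,
    // motion/misc) plus a 32-bit depth buffer = 20 bytes written per pixel.
    const double bytes_per_pixel = 4 * 4 + 4;
    const double write_gbps = w * h * bytes_per_pixel * fps / 1e9;
    // Assume each pixel is then read back roughly twice during lighting/post.
    const double total_gbps = write_gbps * 3;
    std::printf("G-buffer write: ~%.1f GB/s, total G-buffer traffic: ~%.1f GB/s\n",
                write_gbps, total_gbps);
    // ~4.4 GB/s written and ~13 GB/s of traffic at 1440p60, before textures,
    // shadow maps, or overdraw; fatter G-buffers and higher resolutions scale
    // this up quickly, which is where raw bandwidth starts to matter.
    return 0;
}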
 
It also seems like pixel rates on NV cards are always higher. I'm not sure how that correlates to gaming performance, but I seem to remember the 5970 having a better pixel rate than even the GTX 580, and the 5970 was a year older. That was a great card.
 
the noobie tag for your name (via the number of posts you have made) suits you very well indeed.

competitive is more than just absolute performance IMO: quality of drivers, quality of the finished product, quality of the design, quality of the various parts used on the product in question (as well as what has been done to make the technology possible in the first place). for my money, ATI and AMD have done far more "good" for the graphics card landscape than Nv has (who act like a trashy bully with a leather coat and a hair pick)

Vulkan is AMD's "baby" in a sense, derived from and built upon components of AMD's Mantle API, which was donated by AMD to Khronos with the intent of giving Khronos a foundation on which to begin developing a low-level API that they could standardize across the industry, much like OpenGL (not for profit, and AMD gave Nv as well as Intel and MSFT the chance right off the bat to be a part of it, all of whom "thumbed their nose" at AMD, no surprise there)

the only reason a chunk of the advanced features are in DX12 (some, not all; same with DX11, same with DX10) is because of what AMD wanted to put, and was putting, into Mantle (and prior to that, let alone 64-bit computing as we know it). and why Nv does not do "as well" is, IMO, because they do not or did not see a way to tilt the tables in their favor and make massive profits making others look "slow" as they can with DX; it becomes more of a "level playing field"

if you want the same raw performance (in the same fashion, without screwing around with the code) you need the same raw parts, which means you chew up the same raw power and dish out the same raw heat, which also means it costs the same raw $$$ to make it happen. oh, and it would also mean ceding control to an AMD-derived API... Nv does not want any of that, so likely they will only ever support "some of it" at most.

anyways, FC5 as you "point out" is but 1 game, hardly substantial proof that a card that is more of a "compute" card is a "flop". you are comparing, in your example, a pure gaming-focused card (Nv) to a compute (plus gaming) card (AMD), not exactly a proper comparison now is it... just as you feel their valuation should be $0.00 (which is asinine at best), my valuation of your "opinion" of AMD in this very limited example holds as much weight as a bag of dust ^.^

I had 425 shares of Commodork, and it did hit zero.

Perhaps it explains why you aren't comfortable with doom and gloom scenarios :ROFLMAO:

sigh. Guys. That bit in parenthesis. That was the /s. The tell. It was a joke. I tried to be as cartoonishly silly about it as possible, but I guess that's how YOU GUYS ACTUALLY TALK to one another these days so whatever, I don't think I'm the diaper hat here.

And FYI scooter, I've been visiting [H] since 1998, and bought a BX133-RAID off of Kyle's review. Wish I had seen it before I bought a Jetway mobo first.

But yeah, judging by how long someone's been a FORUM member is fucking stupid. Unbelievable how cocky some of you are about things you don't know shit about.
 
sigh. Guys. That bit in parenthesis. That was the /s. The tell. It was a joke. I tried to be as cartoonishly silly about it as possible, but I guess that's how YOU GUYS ACTUALLY TALK to one another these days so whatever, I don't think I'm the diaper hat here.

And FYI scooter, I've been visiting [H] since 1998, and bought a BX133-RAID off of Kyle's review. Wish I had seen it before I bought a Jetway mobo first.

But yeah, judging by how long someone's been a FORUM member is fucking stupid. Unbelievable how cocky some of you are about things you don't know shit about.

Shut-up noob... bwahahaha. Just kidding. I have over 1,000 messages on the forums.. and um.. im just a dumb old Marine.. but according to the forums.. im a genius Bwahaha
 
Uhmmmm, your timeline? No Vega of any type existed when the 1080 Ti launched. Not even close.
So what are you playing at?
Exaggerated, obviously. I couldn't get my hands on a Vega but could grab up a 1080 easily. Found out I could step up and did.
I'll go back and edit my OP so you don't blow a vessel.
 
Wasn't Vulkan derived from Mantle?
Doom ran great on Vulkan, and it runs excellently on RetroArch also; I hope more games start to use it.
Tis true.
If anyone remembers recent history, there were people at AMD who decried DirectX publicly and were bold enough to say it was holding back game development.
Of course the talking heads and Microsoft acolytes criticized them greatly. Mantle was developed to prove this point (which it did). It was then given to the Khronos Group to develop, and they produced Vulkan, a cross-platform graphics API.
 
Thank you, Capt Obvious from Unreal Games.
The genesis of how Vulkan came about is an interesting chain of events. But long before AMD started in on the issue with DirectX, I remember John Carmack speaking (via a web conference) about what a PITA it is writing games using DirectX, and that it is legacy coding that needs to be made obsolete and replaced by an API that can take advantage of the power of modern GPUs.
 
sigh. Guys. That bit in parenthesis. That was the /s. The tell. It was a joke. I tried to be as cartoonishly silly about it as possible, but I guess that's how YOU GUYS ACTUALLY TALK to one another these days so whatever, I don't think I'm the diaper hat here.

And FYI scooter, I've been visiting [H] since 1998, and bought a BX133-RAID off of Kyle's review. Wish I had seen it before I bought a Jetway mobo first.

But yeah, judging by how long someone's been a FORUM member is fucking stupid. Unbelievable how cocky some of you are about things you don't know shit about.


Relax, sarcasm is hard to get across in writing; there's no emotion to easily key off of. I for one was teasing you about your nickname, but had to add something to make it look like I wasn't just running right off the topic range.

And these days for me have been going on for almost six decades so ......... I still talk like they did in those days.
 
It also seems like pixel rates on NV cards are always higher. I'm not sure how that correlates to gaming performance, but I seem to remember the 5970 having a better pixel rate than even the GTX 580, and the 5970 was a year older. That was a great card.

If you remember the fallout from frame-time analysis, running CFX on those generations of cards was basically going backward in performance due to AMD's shittastic frame-pacing.

Yes, I experienced this firsthand.
 
If you remember the fallout from frame-time analysis, running CFX on those generations of cards was basically going backward in performance due to AMD's shittastic frame-pacing.

Yes, I experienced this firsthand.

I remember that, there was a lot of buzz going on about it. I didn't have to worry though... could only run a single. As I remember... the dual x2 cards did not suffer from that?
 
I remember that, there was a lot of buzz going on about it. I didn't have to worry though... could only run a single. As I remember... the dual x2 cards did not suffer from that?

They did, it's just that they were largely obsolete by the time AMD's halfassedness was empirically proven.
 