Currently, CrossFire is better than SLI.
We can add War Thunder to the list of games infected by gameworks. I find this case to be especially interesting because these developers had a decently efficient game engine before.
And that's the worst part: this game was working relatively well and was quite optimized beforehand. But then... Nvidia must've gotten to the devs and... now you can read the forums. To their credit the devs tried to keep it GPU agnostic, but... well... it has not gone well across the board, to say the least.
An efficient engine became a huge, laggy, resource-hog mess, even at the lowest settings. Graphics are arguably worse to boot; it looks like you are playing with plasticine tanks and planes with stickers now.
So as a test case: we have a game which worked well, GameWorks options get introduced, and VOILA! You need a hardware upgrade for practically no benefit.
I've read around and while I see complaints of performance issues, we don't know if it's because of GameWorks or other changes. The big changes I see with the new patch are physically based rendering and destructible environments. Both can be pretty big resource hogs if not tuned.
It's all promoted as GameWorks features by Nvidia, and unfortunately you can't turn it off. Being able to turn it off would be great, but for most of it you can't.
The destructible environment so far doesn't add much over what was there before, as far as I've play tested.
So how long are AMD GPU owners going to have to wait for said drivers? NVIDIA had Fallout 4-optimized drivers out on the 9th.
But you fail to mention no SLI, and they likely had access to the game first? And gamesdontworks, and they still couldn't solve that? But yes, let's give Nvidia a free pass. Jesus, the bias on this board is bullshit. Allahprime1bar! Allah nvidibar!!
AMD users can simply turn tessellation down for a huge performance bump, god rays too for some. The Ti doesn't even run a solid 60fps on a crappy-looking game at 1440p; questions should really be directed to the developer.
But thanks everyone for alpha testing, for us patient folks.
And a destructible environment (ground in particular) has been achievable even on a damn PS2 running Red Faction...
I thought only WaveWorks was used?
http://wccftech.com/multi-gpu-nvidia-sli-amd-crossfire-performance-value-comparison/
Still trolling AMD threads, Joker? You're another person besides Wreckage/PRIME1 who needs to be permanently banned from the AMD Flavor forum. Why don't you go play with your Titan X SLI setup instead?
Poor Joker...
He and sprayingmango used to hate Nvidia but are now their best lovers. I wonder what happened...
I think the issue most have with it is the Fury X to 980 Ti performance gap. It in absolutely no way mirrored expected results, even from previous poorly performing games.
Hahaha, I don't think he will come back for more... LOL
^^^ Wow that post is crazy haha. I would never have guessed that.
Anyways, I am running the game on a borrowed MSI 390 OC and it runs great at VSR 3200x1800 with godrays turned down a bit. I am one of the rare people who doesn't mind a game like this running at a steady 35fps. If I turn it down to 1440p with all effects on, it still runs great in my opinion.
Regardless of my thoughts on performance I do have two things to say.
I feel like this GameWorks fiasco has gotten out of hand. Sure, at one time we saw some questionable overuse of features that hurt AMD; that I agree with. Still, all of you guys can't act like any and every developer that uses GameWorks is gimping AMD on purpose, or like every fault in the game is a result of GameWorks.
I mean come the fuck on.
This is Bethesda releasing a game with no focus on the PC at all. Horrid textures, early-2000s-level physics interaction, horrid AA, terrible windowed mode (can't use VSR resolutions with windowed mode on), and the list goes on and on. I do feel the game is solid in terms of gameplay, but Bethesda really let us all down with how shitty this game looks all these years later. Place the blame with Bethesda, not Nvidia.
Also, AMD performance in this game is not shit; just because something is marginally slower doesn't mean it's shit. The game plays great on this card I've been borrowing and that's that for me.
Well, per nvidia's video trailer there is also Gameworks Destruction, which IMO is the biggest problem.
Waveworks you can turn off (if I remember right), and really, many maps don't have much water so it's mostly not seen anyways.
This Destruction thing, however, seems to be killing performance, and can lag it hard, all for some pretty primitive-looking effects IMHO.
Their new graphics engine is not tagged as GameWorks "insert effect name here", but per some dev comments, Nvidia worked with them on at least some parts of it.
Troll? Which post in this thread of mine is trolling? The only ones trolling here are you, CoolVibrations, Creig, and the others here who can't handle any outside opinions and inject 3rd party forum posts into the thread that have nothing to do with the discussion.
FO4 is heavy on CPU, AMD users are going to suffer more thanks to the driver overhead issues. One of the few games these days with that problem.
How old is that post? Like 10 years ago? Opinions and viewpoints change; that's what intelligent people are capable of doing. I was wrong to say those things about Kyle at the time and clearly my views about Hard|OCP were incorrect; I attribute it to being young and stupid. All this proves is that I'm capable of looking at AMD and NVIDIA objectively. In fact, I used to advocate for Team Red quite a bit back in the day, as I believed they were technologically ahead of the curve and on point (e.g. 9700 Pro/9800), but then AMD took over and, well, they hit a spiral of failure from which there's no pulling out.
Edit: Found the post, almost 13 years ago. **Must have really dug deep to find that, pathetic.**
Granted, most of the uproar will be moot after a patch/driver. But in no way is the reason for the discrepancy solely a DX11 issue. You just have to look at the games as a whole to see how poorly these releases look against the norm. No one here is going to debate the 980 Ti being in the lead. With GW, I probably wouldn't feel too bad about 10-15% between the 980 Ti and Fury X, but a near 40-50% discrepancy isn't in any way reasonable, nor should it be condoned.
Whatever the reason, it is an issue that should be getting far more concern. Funny how at the beginning of the year it was a bit downplayed, but with each release the issue has grown to far larger proportions. And now watch so many scramble, still peddling the same wares they did before. It's like in Naked Gun when Leslie Nielsen's character is standing in front of the exploding fireworks factory exclaiming there's nothing to see here. lol
It isn't really GW that causes the biggest issue, although it doesn't help, closed source and all. It is the apparent lack of time given for drivers before release. TW3 showed that even Nvidia was caught with its pants down in regards to Kepler (if you believe there was no intent behind a Maxwell sales initiative). Anno 2205 looks dangerously the same with its release benchmarks. Granted, as I stated before, after the drivers and patches we will likely see the normal results we expect, but unfortunately it is this poor releasing that is the issue at hand.
I have yet to see any performance analysis comparing God Rays with low tessellation (4x or 8x) vs high (16x +) so I won't comment on how its impacting AMD. There's no proof there whatsoever. I'm not saying it's untrue, though.
The game is a CPU hog and whenever that happens, AMD GPUs are going to suffer. So yes it's a DX11 problem. When modern Intel CPUs are bottlenecking Nvidia GPUs, that effect is going to be significantly worse for AMD owners.
I pity anyone running an AMD CPU + GPU combo in this game. Unless it scales really well on FX-8000 chips.
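To make the CPU-bound point concrete, here is a toy model of why higher per-draw driver overhead hurts once the CPU side, not the GPU, is pacing the frame. Every number in it (draw count, per-draw costs, GPU time) is a made-up assumption for illustration, not a measurement of any real driver or game:

```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    // Toy model, not a benchmark: in a CPU-bound scene the frame time is dominated
    // by the CPU cost of pushing draw calls through the driver, not by the GPU.
    const int    draw_calls       = 6000;   // made-up draw count for a heavy scene
    const double gpu_frame_ms     = 10.0;   // assumed GPU time per frame
    const double low_overhead_us  = 2.0;    // assumed CPU cost per draw, "thin" driver
    const double high_overhead_us = 4.0;    // assumed CPU cost per draw, heavier driver

    auto frame_ms = [&](double per_draw_us) {
        double cpu_ms = draw_calls * per_draw_us / 1000.0;
        return std::max(gpu_frame_ms, cpu_ms);  // whichever side finishes last paces the frame
    };

    std::printf("low overhead : %.0f fps\n", 1000.0 / frame_ms(low_overhead_us));   // ~83 fps
    std::printf("high overhead: %.0f fps\n", 1000.0 / frame_ms(high_overhead_us));  // ~42 fps
}
```

Same GPU, same scene; doubling the assumed per-draw CPU cost roughly halves the frame rate once the game is CPU-limited, which is the situation being described for Fallout 4.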
The problem is with the game dev not communicating what AMD needs for driver development. In extreme examples the game dev needs to patch the game.
I agree that the devs need to be transparent. If they aren't handing over the info, for whatever reason, or if AMD is dragging their feet, it would be good to know.
Buying nVidia is the last thing that someone should do in this situation though. If anyone thinks that only having an nVidia install base and no competition will improve anything they don't understand how monopolies work. Or, they work for nVidia. The 2nd is far more likely since the concept of monopolies is simple enough for a 10 year old to understand.
Apple had the monopoly on smartphones with the first iPhone, and yet in the current market there are many other players participating.
And with the current GPU/CPU market, nVidia/Intel have a virtual monopoly anyway. They get to dictate where the market heads next. Not their fault AMD flopped hard. If AMD fails, I'm sure someone else will pick up the slack, maybe by buying out AMD then.
The CPU/GPU market is too lucrative for 1 dominant player. ARM showed you it can happen. Why won't it happen with GPUs? You don't think someone like Qualcomm/Samsung will step in? Or even Microsoft? MS' hardware division has been on fire lately, I'm sure they'd love to get their hands on a CPU+GPU mfg.
Ok, there are just too many people equating market share with ownership. Market share is just what is sold, not in any way an accurate representation of what percentage is being used. Keep in mind that for the first half of this year AMD sold nothing into the channel, therefore part of the market numbers were 0 because AMD released nothing to AIB partners, and all sales were just depleting stock already in the channel. Also keep in mind this isn't solely new ownership, as in never owned before, nor can it be equated entirely to those that moved from AMD to Nvidia, albeit that will be part of the number. This is partly, and likely in great share, those that upgraded from an existing Nvidia product to a new Nvidia product. From my experience and what I have seen, Nvidia owners tend to upgrade far more often. Seriously, how many posters do you see talking about the 680 they still use? In contrast, look at the great number of 7970s posters still use (not talking about the 280X, but the vanilla 7970).
But that has precious little to do with what you are alluding to. Games aren't made to market share; they are far more likely made to ownership. Otherwise, would you ever see the 5xx series or 6xxx series listed in minimum requirements? And in another forum a great point was made: the sheer number of GCN parts GREATLY outweighs all Nvidia architectures. How? Consoles! Yes, those little boxes skew the numbers greatly in favor of AMD ownership as it pertains to game construction. Also let's not forget that most of these titles have been in the works for more than a year, up to 3 years a great deal of the time, therefore Maxwell was not even a consideration for some in the beginning. So then we are left wondering at what point the consideration was made to add these optimizations that show these results, and why, against the percentages of ownership, these are the results we are getting.
Technology wise yes, XDMA offers much better scaling and consistency than the bridge used for SLI. The problem is the software support is hit or miss. I had a very good experience with CrossFire in Battlefield and most other DICE/EA games, but not so good of an experience with Unreal Engine games.
Do you have any data/proof to back up your conjecture? All I'm seeing is the tired same old tirade parroted over and over. Console advantage is a wash for AMD.
AMD chips were used with the 360, and now the PS4 and Xbone, and where is the advantage for AMD? I don't see any performance/feature advantage, nor do I see better time to market or better stability. I just don't see any indication of that being an advantage to AMD. It's been 10 years since the 360 came out. Where's this performance advantage that you speak of? If anything, nVidia's doing better since they left the console market. Funny that.
Also, sales numbers indicate ownership trends. Sales channels request the amount of stock they request because that's the amount they think they can sell. Of course, it's not gonna be a 1-1 relationship between sales numbers and actual ownership, but it's great as an indication. If channels stop stocking your stuff, you get 0 sales. Simple as that.
Minimum requirements are not actually indicative of performance. You don't have the settings used, you don't have FPS numbers, you have nothing but part names. As far as anybody can tell, they could've made the game run at 1, 10, 15, 20 FPS on the part and thrown it out there.
As for your "point" about users with old cards not upgrading, please give me some data from somewhere reputable. Your back side is not one of them.
Seriously, you must just like to argue. The Steam survey has all the numbers you need. Everything I stated is based on fact, real numbers. Market trends speak only to sales, never to ownership percentages. Another fact you do not seem to be aware of.
The engine is essentially the same as Skyrim, now 3 years old.
Arrrrrrrrrrrgh.
So you mean the majority of people buy GPUs to put on shelves? So Intel is the dominant player in the GPU industry? Steam survey, laughable. Yes, that's 1 data point. However if NVIDIA outsells AMD 4-1 over consecutive quarters, those data points are now suddenly worthless because of Steam surveys?
Let me pretend that Steam is an accurate representation of the GPU market. On Steam, NVIDIA ownership is twice that of ATI. 54% vs 28%.
What I don't understand is your logic. On Steam you have 2 NVIDIA GPUs per 1 AMD GPU. On sales you have 4 NVIDIA sales per 1 AMD sale.
So please, can you enlighten me on how the hell did you arrive at the conclusion that there are more GCN GPUs out there compared to NVIDIA, even though both data points you and I showed point to the opposite?
Also, can you name me 1 instance where AMD PC GPUs benefit from consoles? Mantle was not a benefit, it was an 8 million USD gamble that, from the looks of it now, is not paying off. So where is the supposed console benefit?
Also, Steam is very unreliable as a measure of GPU ownership. http://forums.anandtech.com/showthread.php?t=2134147
Highlights of AMD Catalyst™ 15.11.1 Beta Windows Driver
Performance Optimizations
Includes quality and performance optimizations for the following titles:
Star Wars™: Battlefront
Fallout 4
Assassin's Creed® Syndicate
Call of Duty®: Black Ops III
Try to take a step back, relax and think logically. GCN is in all consoles now, and their sales are quite large. Hence why I state GCN's numbers are greater than Maxwell's by far, and still greater than Maxwell and Kepler combined. This is only speaking to numbers, not performance, and not declaring a superior architecture, so no need to go that route.
The 4:1 and 2:1 go so far as to prove what I said: NVIDIA owners tend to upgrade far more frequently. Most of the numbers prove it, as do the owners that post in forums. It isn't a contest, nor is it a statement of anything more than observable trends.
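For what it's worth, the arithmetic behind that inference can be sanity-checked with a toy example. The upgrade rates below are made-up assumptions, not market data; the point is only that a 2:1 installed base plus a higher upgrade frequency is consistent with a 4:1 sales split, even with nobody switching brands:

```cpp
#include <cstdio>

int main()
{
    // Toy numbers, purely illustrative; the upgrade rates are assumptions, not market data.
    const double nvidia_installed = 2.0;   // Steam-survey-style installed base: 2 NVIDIA cards...
    const double amd_installed    = 1.0;   // ...per 1 AMD card
    const double nvidia_upgrades  = 0.20;  // assumed share of NVIDIA owners buying a new card per quarter
    const double amd_upgrades     = 0.10;  // assumed share of AMD owners buying per quarter

    const double nvidia_sales = nvidia_installed * nvidia_upgrades;
    const double amd_sales    = amd_installed * amd_upgrades;

    // A 2:1 installed base with twice the upgrade frequency produces 4:1 quarterly sales
    // even if nobody switches brands.
    std::printf("quarterly sales ratio: %.1f : 1\n", nvidia_sales / amd_sales);  // prints 4.0 : 1
}
```

Of course the same 4:1 could also come from owners switching brands, which is exactly the objection raised below; the numbers alone don't settle which story is true.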
As I mentioned minute one, I don't care about who's in first, and I don't deny the 980 Ti is clearly the winner this release. GW in part doesn't bother me, but it does have some traits that all of us should be concerned about. Honestly I think there should be tess sliders in game; thank God AMD has an override. The lack of one in TW3 shows a very negative trend developing. Again, this should never have been an AMD vs NVIDIA debate but a dev/gamer one, which includes concern for GWs.
Wait what? How can you infer from the 4:1 and 2:1 trends that nVidia owners tend to upgrade more frequently? It might be that the increase is mostly due to AMD users switching, it might be that ducks suddenly have an interest in gaming. What the hell are you on about? Most of what numbers prove that?
Owners that post in forums only serve to illustrate the enthusiast space, really. You have to care past a certain point to join a forum to post about it.
GCN is in all consoles, great. So then why does every single console port run like dog crap on AMD hardware, even more so than nVidia hardware? Step back, chill and think logically
Highlighted the key words for you there. Consoles all use low-level programming and handle their own memory management. For a port you end up with DirectX or OpenGL doing all the scheduling and memory management. That means most if not all of the optimizations get removed. Further, supporting async operations can be a big deal. AMD for example uses a discrete tessellation unit on their GPUs. On a console devs could be rendering one frame while tessellating the next, maximizing utilization of the hardware. Port it over to a PC and it may turn synchronous, where you are either rendering or tessellating. It's my understanding there is a hack to get it to work under DirectX and OpenGL, but that may be difficult if the tessellation is occurring within some binary blob or code block you can't access.
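As a rough sketch of the kind of cross-frame overlap being described, here is a minimal example using plain C++ threading rather than any real console or graphics API; every name in it is a placeholder. While frame N is rendering, frame N+1's geometry work is already running on another thread, whereas a naive port that waits for each stage in turn loses that overlap:

```cpp
#include <cstdio>
#include <future>

// Placeholder stand-ins for real engine stages; nothing here touches an actual graphics API.
struct FrameGeometry { int frame; };

FrameGeometry tessellate(int frame)     // prepare geometry for `frame` (stand-in work)
{
    return FrameGeometry{frame};
}

void render(const FrameGeometry& g)     // draw the already-prepared frame (stand-in work)
{
    std::printf("rendered frame %d\n", g.frame);
}

int main()
{
    const int frame_count = 5;

    // Pipelined loop: kick off tessellation for frame i+1 before rendering frame i,
    // so the two stages overlap instead of running strictly one after the other.
    std::future<FrameGeometry> next = std::async(std::launch::async, tessellate, 0);
    for (int i = 0; i < frame_count; ++i) {
        FrameGeometry current = next.get();                            // geometry for frame i is ready
        if (i + 1 < frame_count)
            next = std::async(std::launch::async, tessellate, i + 1);  // start frame i+1 early
        render(current);                                               // render(i) overlaps tessellate(i+1)
    }
}
```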
You do realize Mantle did pay off. AMD fed it into DX12 and into Vulkan, which is coming; it's Mantle taken to the next level, it will be multi-platform and the new future standard. Mantle brought us asynchronous compute (many PS4 titles use it, some Xbox One ones, future Battlefield games, AotS, Deus Ex and other AMD-evolved titles, and some DX12 games), it brought us split-frame rendering, and the ability to "crossli": e.g. AotS crossli'd a Fury X (master card since it's superior) with an inferior 980 Ti and it worked; swap the cards and performance was worse. This and more is what Mantle gave us. Go back to the Nvidia forums where you belong.