No AMD 300-series review samples?

Thanks for the link, Mango, and plus 1 on skynet's point of view. I want everyone to have the same great game experience no matter whose card they are running. Open source would also make games like Watch Dogs better at release instead of a crapfest on both companies' cards. I really have no brand loyalty when it comes to computers; it is all about performance for me. It would be awesome to see an AMD revival on the CPU and GPU front, IMO, because it would push Intel and Nvidia to make even better products available to us.
Edit: No wonder I missed that article; it's from over a year ago.
 
Less biased? Or just more biased the other way?

Who might those be? What's your list?



EDIT: All people are biased. We have all had personal experiences that influence us. If there are any reviewers out there who claim they have no biases, they are either clueless or bald-faced liars. Likely the latter.

It's too bad we can't have some double-blind testing, with the reviewers stating their observations and opinions, then taking off the wraps and conducting measurements. The reviewers won't do this because they are afraid they will be shown to be quacks instead of the experts they claim to be.
 
A lot of letters for saying:
"I'm still going to use the fallacy of shifting the burden of proof, even if it is a fallacy and leaves me with no valid arguments."

This is funny.
A lot of people whining about what they THINK goes on...but not a single piece of documentation.

I am doing this because I am tired of people posting false claims to bolster their FOTM brand.

Just look at this forum.
People make up stuff about e.g. PCPer.com...so they have to come here with FACTS...to correct the misinformed and the trolls.
The forum is laden with false claims, made by people who have no clue.
But they still do it.

So I have taken the stance that I don't trust posters making claims without documentation.

So far NO ONE has documented the normal procedure of optimizing a game in the drivers.
Until that is done...the "black box" argument is pure PR FUD.

Playing on people's ignorance about the finer details of the topic.
So far the only FUD seems to come from you. I detailed the facts behind proprietary software from a standpoint of basic principles. You have given no proof, or even a conjecture, that says our stance is wrong; hence you are far closer to FUD yourself. Seriously, if you believe that AMD can easily optimize drivers for a GW feature, then you know NOTHING of software and proprietary fundamentals.
 
If AMD is unable to optimize GameWorks games w/o code access, how were they able to optimize TW3?

And that's only one case. The other case was the huuuuuuuuuge shitstorm about Project Cars that turned out to be completely fabricated nonsense. There were no GW features. There was no GPU PhysX. And the CPU PhysX required relatively little CPU resources. So everybody got allllllllllllllllllllll spun up about big bad evil nvidia trying to screw over AMD and, as it turned out, no... AMD just got caught with their pants down (like usual).


But now everyone is running around thinking that Jen-Hsun is the microchip industry equivalent of Montgomery Burns.
It is blind optimization. No one said they can't optimize at all, just that any optimizations are trial and error. The Witcher 3 drivers from AMD increased performance, but the question is what percentage of perfect optimization they were released at. It's like being blindfolded and told to put 10 pages in order: there is a chance you get it right, but it's far more likely you don't, and then it comes down to how close you got.
 
I want everyone to have the same great game experience no matter whose card they are running. Open source would also make games like Watch Dogs better at release instead of a crapfest on both companies' cards.

The reality is that even game developers have preferences (e.g. Epic and Codemasters). Those preferences could be due to personal experience, quality of IHV support/engagement or anything that can sway opinion.

If everyone has the exact same experience all the time it means that there is no innovation and nobody is pushing the envelope. We'll all have to cater to the lowest common denominator which is bad for everyone in the long run.

Wouldn't it be nice if AMD's True Audio caught on so their customers could experience better sound in games? Or is it better for nobody to have it just so nVidia doesn't feel left out?
 
So far the only FUD seems to come from you. I detailed the facts behind proprietary software from a standpoint of basic principles. You have given no proof, or even a conjecture, that says our stance is wrong; hence you are far closer to FUD yourself. Seriously, if you believe that AMD can easily optimize drivers for a GW feature, then you know NOTHING of software and proprietary fundamentals.


Actually they can. When code is compiled to machine language, the driver has full access to it, and it can be read and changed at the machine level. How do you think software crackers do their thing?

Just as an example: download Visual Studio and run a program in debug mode. You can see what is going on at the machine-code level (it will step line by line through the code), and on top of that you can profile shaders at the same time to see exactly what is going on, with a DirectX profiler or similar software.

https://msdn.microsoft.com/en-us/library/windows/desktop/jj585574%28v=vs.85%29.aspx

https://www.opengl.org/sdk/tools/gDEBugger/

http://developer.amd.com/tools-and-sdks/opencl-zone/codexl/


If you say it can't be done, that means AMD doesn't have full control of their drivers, because the driver does the shader compiling.

Now, about shader compiling: shader code is actually shipped in plain text files and compiled at runtime.

For some shaders, engine code has to be modified due to render ordering or other specifics of how a renderer works; this is where the proprietary shader software is hidden.

So far no GW shader code has had any negative effect of any degree when it comes to the code itself. HairWorks was specific to tessellation performance and tessellation levels, and when dropping tessellation levels it's easy to see the difference in the hair. And, as above, AMD does have control over this through their drivers.
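
To make the runtime-compilation point concrete, here's a minimal sketch (assuming Windows with the D3DCompiler headers; the shader source, file name and entry point are made up for the example, and error handling is trimmed). It compiles HLSL text to a DXBC blob at runtime and then disassembles that same blob, which is what the driver ultimately receives, back into readable D3D assembly:

```cpp
// Minimal sketch: compile HLSL at runtime, then dump the resulting D3D assembly.
// The shader source and file name below are invented for illustration.
#include <d3dcompiler.h>
#include <wrl/client.h>
#include <cstdio>
#include <cstring>

#pragma comment(lib, "d3dcompiler.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Hypothetical pixel shader shipped as plain text with the game.
    const char* hlsl =
        "float4 main(float4 pos : SV_POSITION) : SV_TARGET"
        "{ return float4(1.0f, 0.0f, 0.0f, 1.0f); }";

    ComPtr<ID3DBlob> bytecode, errors;
    HRESULT hr = D3DCompile(hlsl, std::strlen(hlsl), "example.hlsl",
                            nullptr, nullptr, "main", "ps_5_0",
                            0, 0, &bytecode, &errors);
    if (FAILED(hr)) {
        if (errors) std::printf("%s\n", (const char*)errors->GetBufferPointer());
        return 1;
    }

    // This DXBC blob is what eventually gets handed to the driver via
    // CreatePixelShader(); it can be disassembled back into readable D3D assembly.
    ComPtr<ID3DBlob> disasm;
    hr = D3DDisassemble(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
                        0, nullptr, &disasm);
    if (SUCCEEDED(hr))
        std::printf("%s\n", (const char*)disasm->GetBufferPointer());
    return 0;
}
```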

It is blind optimization. No one said they can't optimize at all, just that any optimizations are trial and error. The Witcher 3 drivers from AMD increased performance, but the question is what percentage of perfect optimization they were released at. It's like being blindfolded and told to put 10 pages in order: there is a chance you get it right, but it's far more likely you don't, and then it comes down to how close you got.

Tools are available to the general public to see what is going on at the driver level; this is how developers optimize software for IHV hardware without the compiler source code. And since the shader compiler lives in the driver, the IHVs are never blind to this: they know exactly what is going on. That is how, even without the source code, the IHVs can make driver optimizations for games.

You talk about FUD, but you don't even know how things work when it comes to shaders, so who is talking FUD?


Now, on PCPer: I don't want to get into that conversation, because I don't believe they are unbiased. Things they have done in the past show that; maybe in the future they will change...
 
§kynet said:
That is not the same as having access to the source, and you know it. But you're ignoring the developer's point of view: they can't optimize at all; the code supplied by Nvidia stays just the way it is. I am very much against this practice; it is obvious that Nvidia will take advantage of it given their position in the market.

You still haven't posted the developers that ship source code to AMD and NVIDIA for driver optimizations.
Neither has AMD, for that matter.

As explained by others in the thread, shaders are compiled at runtime.
That is how you optimize for the game code in your drivers.

Just saying "black box", "black box", "black box", "black box" is really a sign you don't know how graphics card drivers work.

I bet this is what AMD is counting on...people crying in forums, screaming "black box"...who cares if they are wrong, the point is moot...AMD doesn't, it seems.

So can you (or AMD) provide the list of developers shipping source code and not binaries?
(Even though the fallacy about how optimizations are done at the driver level has already been debunked, I will play this one to the end.)


But I bet we will never get a list...or an example.
Care to prove me wrong?

And crying "black box" is not proving me wrong :p
 
So far the only FUD seems to come from you. I detailed the facts behind proprietary software from a standpoint of basic principles. You have given no proof, or even a conjecture, that says our stance is wrong; hence you are far closer to FUD yourself. Seriously, if you believe that AMD can easily optimize drivers for a GW feature, then you know NOTHING of software and proprietary fundamentals.

Try reading the thread.
Do you want to bring up the fallacy of "shifting the burden of proof"? :rolleyes:

Try reading Razor1's post...I am not the only one in this thread with a real clue to how things are done.

DirectX is proprietary...and closed...Microsoft has not even revealed the source code for all the SDK samples ffs! lol

The SAME thing!!!

But you don't go around crying on forums that:
"DirectX is a black box, AMD cannot omtimize it drivers for DirectX!!!"

Neither does AMD.

Double standards?
Or just PR FUD, as I say it is.

You took the PR bait...sad for you.
But that doesn't make you right...you don't even have a valid argument lol :rolleyes:
 
Actually they can. When code is compiled to machine language, the driver has full access to it, and it can be read and changed at the machine level. How do you think software crackers do their thing?

Just as an example: download Visual Studio and run a program in debug mode. You can see what is going on at the machine-code level (it will step line by line through the code), and on top of that you can profile shaders at the same time to see exactly what is going on, with a DirectX profiler or similar software.

https://msdn.microsoft.com/en-us/library/windows/desktop/jj585574%28v=vs.85%29.aspx

https://www.opengl.org/sdk/tools/gDEBugger/

http://developer.amd.com/tools-and-sdks/opencl-zone/codexl/


If you say it can't be done, that means AMD doesn't have full control of their drivers, because the driver does the shader compiling.

Now, about shader compiling: shader code is actually shipped in plain text files and compiled at runtime.

For some shaders, engine code has to be modified due to render ordering or other specifics of how a renderer works; this is where the proprietary shader software is hidden.

So far no GW shader code has had any negative effect of any degree when it comes to the code itself. HairWorks was specific to tessellation performance and tessellation levels, and when dropping tessellation levels it's easy to see the difference in the hair. And, as above, AMD does have control over this through their drivers.



Tools are available to the general public to see what is going on at the driver level; this is how developers optimize software for IHV hardware without the compiler source code. And since the shader compiler lives in the driver, the IHVs are never blind to this: they know exactly what is going on. That is how, even without the source code, the IHVs can make driver optimizations for games.

You talk about FUD, but you don't even know how things work when it comes to shaders, so who is talking FUD?


Now, on PCPer: I don't want to get into that conversation, because I don't believe they are unbiased. Things they have done in the past show that; maybe in the future they will change...

Not quite. I suggest you read this:

According to Valve programmer Rich Geldreich, the principle benefit of GameWorks is that it gives Nvidia an opportunity to optimize segments of code that it wouldn’t normally control directly.

“[T]here are fundamental limits to how much perf you can squeeze out of the PC graphics stack when limited to only driver-level optimizations,” Geldreich told ExtremeTech. “The PC driver devs are stuck near the very end of the graphics pipeline, and by the time the GL or D3D call stream gets to them there’s not a whole lot they can safely, sanely, and sustainably do to manipulate the callstream for better perf. Comparatively, the gains you can get by optimizing at the top or middle of the graphics pipeline (vs. the very end, inside the driver) are much larger.”

http://www.extremetech.com/gaming/1...elopers-weigh-in-on-the-gameworks-controversy

It says exactly what I was alluding to, and what you and Factum keep leaving out just to appear correct. By the way, that is likely the best article I have seen on the subject. As I stated, it can be optimized, but because of the inability to see the code they are somewhat shooting in the dark when it comes to 100% optimization; this doesn't mean no optimization.

Seems you were either unaware/not well versed or intentionally misleading. My experience with this particular subject is rather limited and fairly new; information on the science is hard to come by with what little time I have. But it seems even I had a better grasp of the situation just by using reason and critical thinking.

FOR OTHER READERS: On page 3 of that article is a good diagram of the game pipeline. Really a good and informative article, with which I agree on nearly all points.
 
Try reading the thread.
Do you want to bring up the fallacy of "shifting the burden of proof"? :rolleyes:

Try reading Razor1's post...I am not the only one in this thread with a real clue to how things are done.

DirectX is proprietary...and closed...Microsoft has not even revealed the source code for all the SDK samples ffs! lol

The SAME thing!!!

But you don't go around crying on forums that:
"DirectX is a black box, AMD cannot omtimize it drivers for DirectX!!!"

Neither does AMD.

Double standards?
Or just PR FUD, as I say it is.

You took the PR bait...sad for you.
But that doesn't make you right...you don't even have a valid argument lol :rolleyes:


Read my post above this for the proof and facts of the situation.
 
Not quite. I suggest you read this:

According to Valve programmer Rich Geldreich, the principle benefit of GameWorks is that it gives Nvidia an opportunity to optimize segments of code that it wouldn’t normally control directly.

“[T]here are fundamental limits to how much perf you can squeeze out of the PC graphics stack when limited to only driver-level optimizations,” Geldreich told ExtremeTech. “The PC driver devs are stuck near the very end of the graphics pipeline, and by the time the GL or D3D call stream gets to them there’s not a whole lot they can safely, sanely, and sustainably do to manipulate the callstream for better perf. Comparatively, the gains you can get by optimizing at the top or middle of the graphics pipeline (vs. the very end, inside the driver) are much larger.”

http://www.extremetech.com/gaming/1...elopers-weigh-in-on-the-gameworks-controversy

It says exactly what I was alluding to, and what you and Factum keep leaving out just to appear correct. By the way, that is likely the best article I have seen on the subject. As I stated, it can be optimized, but because of the inability to see the code they are somewhat shooting in the dark when it comes to 100% optimization; this doesn't mean no optimization.

Seems you were either unaware/not well versed or intentionally misleading. My experience with this particular subject is rather limited and fairly new; information on the science is hard to come by with what little time I have. But it seems even I had a better grasp of the situation just by using reason and critical thinking.

FOR OTHER READERS: On page 3 of that article is a good diagram of the game pipeline. Really a good and informative article, with which I agree on nearly all points.

That is why I have stated that the renderer is where things can go wrong, not the shaders themselves...

I think you have to pay more attention to what you are reading... the proof and facts are being taken out of context. Reading one article and basing things on bits and pieces of it isn't the way to get to "I know what is going on" and "everyone else is talking FUD".

Shaders are compiled at runtime:

http://www.google.com/url?sa=t&rct=...y4CwBw&usg=AFQjCNFWALhPRnOMrTqcKNV8R266u5Ny3w

This is a good starting point for understanding how shaders are compiled. (Their "offline" is what I am calling runtime, i.e. as the application loads up; their "runtime" is when the application is actually running.)

In the first stage of shader compilation, the code is turned into D3D ASM. This code is readable and not that hard to change, since it is straightforward assembly language. In the final stage, the ASM code is compiled by the driver for the hardware. This is also changeable, and it is actually where the IHVs introduce many of their driver optimizations. All of this happens well before entering the graphics pipeline. The picture they used is not actually that accurate, at least with respect to what they were talking about.
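
To illustrate that second stage, here is a purely conceptual sketch, not any real driver interface: the function names, the hash, and the per-game profile table are all invented. It shows the commonly described "shader replacement" idea, where a driver recognizes the DXBC blob a game hands it and substitutes a hand-tuned equivalent before compiling it to hardware ISA:

```cpp
// Conceptual sketch only - NOT a real driver API. Illustrates the often-described
// "shader replacement" idea: recognize an incoming DXBC blob by hash and swap in
// a hand-tuned equivalent before the driver's DXBC -> hardware ISA compilation.
#include <cstdint>
#include <cstddef>
#include <unordered_map>
#include <vector>

// FNV-1a hash of the incoming bytecode (for illustration only).
static uint64_t HashBytecode(const uint8_t* data, size_t size)
{
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < size; ++i) { h ^= data[i]; h *= 1099511628211ull; }
    return h;
}

// Hypothetical per-game profile: known shader hash -> tuned replacement blob.
static std::unordered_map<uint64_t, std::vector<uint8_t>> g_profile;

// What a driver's shader-creation path could conceptually do.
std::vector<uint8_t> SelectBytecode(const uint8_t* dxbc, size_t size)
{
    auto it = g_profile.find(HashBytecode(dxbc, size));
    if (it != g_profile.end())
        return it->second;                          // use the tuned version
    return std::vector<uint8_t>(dxbc, dxbc + size); // otherwise pass through as-is
}
```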

Granted, it would be easier to make the changes with the source code, but when they don't have it, they can still do many things.

Why do you think it takes so long to load many games? If you ever load up a game, turn on the console, set it to debug mode, and watch what the game is doing when it's loading a level. Sometimes precompiled shaders are used when a device doesn't have enough memory, but that's a whole different group of devices.
 
That is why I have stated that the renderer is where things can go wrong, not the shaders themselves...

I think you have to pay more attention to what you are reading... the proof and facts are being taken out of context. Reading one article and basing things on bits and pieces of it isn't the way to get to "I know what is going on" and "everyone else is talking FUD".

Shaders are compiled at runtime:

http://www.google.com/url?sa=t&rct=...y4CwBw&usg=AFQjCNFWALhPRnOMrTqcKNV8R266u5Ny3w

This is a good starting point for understanding how shaders are compiled. (Their "offline" is what I am calling runtime, i.e. as the application loads up.)

In the first stage of shader compilation, the code is turned into D3D ASM. This code is readable and not that hard to change. In the final stage, the ASM code is compiled by the driver for the hardware. This is also changeable, and it is actually where the IHVs introduce many of their driver optimizations.

Granted, it would be easier to make the changes with the source code, but when they don't have it, they can still do many things.

Why do you think it takes so long to load many games? If you ever load up a game, turn on the console, set it to debug mode, and watch what the game is doing when it's loading a level. Sometimes precompiled shaders are used when a device doesn't have enough memory, but that's a whole different group of devices.

First, stop linking things I have to download; ain't gonna happen. Second, I have no reason to believe you over software engineers, who at best seem quite impartial. There is concern, and I agree with that assertion. But also, like I said and they concur, GW isn't inherently evil, but it does open the door to the accusations.
 
First, stop linking things I have to download; ain't gonna happen. Second, I have no reason to believe you over software engineers, who at best seem quite impartial. There is concern, and I agree with that assertion. But also, like I said and they concur, GW isn't inherently evil, but it does open the door to the accusations.


lol, what, you don't have a PPT plugin for your browser? Sorry, not my fault.

I agreed with what he said, lol; read the first sentence in my response.
 
First, stop linking things I have to download; ain't gonna happen. Second, I have no reason to believe you over software engineers, who at best seem quite impartial. There is concern, and I agree with that assertion. But also, like I said and they concur, GW isn't inherently evil, but it does open the door to the accusations.

You try to win a debate with "Shame on you, I don't have PowerPoint, thus I am right"?! :eek:

You didn't counter a single point...this is what I have been talking about for a long time...FUD vs facts.

lol, what, you don't have a PPT plugin for your browser? Sorry, not my fault.

I agreed with what he said, lol; read the first sentence in my response.

He is not interested in facts, it seems...just parroting PR and cherry-picking to suit his confirmation bias...facts be damned!!!
 
You still haven't posted the developers that ship source code to AMD and NVIDIA for driver optimizations.
Neither has AMD, for that matter.

As explained by others in the thread, shaders are compiled at runtime.
That is how you optimize for the game code in your drivers.

Just saying "black box", "black box", "black box", "black box" is really a sign you don't know how graphics card drivers work.

I bet this is what AMD is counting on...people crying in forums, screaming "black box"...who cares if they are wrong, the point is moot...AMD doesn't, it seems.

So can you (or AMD) provide the list of developers shipping source code and not binaries?
(Even though the fallacy about how optimizations are done at the driver level has already been debunked, I will play this one to the end.)


But I bet we will never get a list...or an example.
Care to prove me wrong?

And crying "black box" is not proving me wrong :p

The problem, as I understood it, wasn't a source-code-level problem at all; it was nVidia's license agreements preventing AMD from receiving the game in a timely manner while nVidia was consulting on it. AMD would receive infrequent snapshots at best, and they wouldn't include the GW enhancements enabled or in their final configurations.

The next issue is that nVidia would tune GW features in a way intended to specifically hinder AMD cards (as well as older gen nVidia cards) so their latest generation looked good in reviews.

I do not know how much truth there is to any of this, but this was the gist of the explanation given to me by an old high school friend who writes game code for some small company in Austin. And he loves nVidia... go figure. :confused:
 
It may be encrypted/obscured in some way to prevent someone from easily copying it. But the shader assembly code will eventually be sent to AMD's drivers for compilation, and it can be pulled at that point. Again, not as easy as reading HLSL, but AMD definitely knows what the code is doing.

The shader code isn't where the problems are. It's simply nVidia's excessive use of tessellation. Have you seen the Crysis 2 tessellation trickery?

It does nothing... except slow down AMD cards more than nVidia cards.:eek:
 
The problem, as I understood it, wasn't a source-code-level problem at all; it was nVidia's license agreements preventing AMD from receiving the game in a timely manner while nVidia was consulting on it. AMD would receive infrequent snapshots at best, and they wouldn't include the GW enhancements enabled or in their final configurations.

The next issue is that nVidia would tune GW features in a way intended to specifically hinder AMD cards (as well as older gen nVidia cards) so their latest generation looked good in reviews.

I do not know how much truth there is to any of this, but this was the gist of the explanation given to me by an old high school friend who writes game code for some small company in Austin. And he loves nVidia... go figure. :confused:

Now we are getting somewhere.
But this is a far cry from the "black box" FUD that is going around forums.

The first part we can debate.

The second part I have to object to.
I have seen plenty of FUD about "Kepler's planned obsolescence".
Again, we are talking about forum FUD.

My GK110 has not LOST any performance since launch.
Quite the opposite.
What HAS happened is that NEWER titles put a different load on the GPU, and there IS a major difference between "Kepler" and "Maxwell" in the hardware:
[image: maxwell_vs_kepler_power_efficiency.png]


"Maxwell" is finer grained.
It not like all of the sudden "kepler" started slowing down due to eveil things done by NVIDIA in the drivers.
(Even if you read the foums that is the notion the uninformed are crying all over forums)

It simply a question about that "Maxwell" doing what NVIDIA has been doing since the G80...evolve their architechture over previoues version.

But again we are back at this point:

Uninformed users making far too much noise on forums with FUD (due to ignorance) and drowning out the technical debates.
Quite sad to observe.
 
I have seen plenty of FUD about "Kepler's planned obsolescence".
Again, we are talking about forum FUD.

My GK110 has not LOST any performance since launch.
Quite the opposite.
What HAS happened is that NEWER titles put a different load on the GPU, and there IS a major difference between "Kepler" and "Maxwell" in the hardware:
[image: maxwell_vs_kepler_power_efficiency.png]


"Maxwell" is finer grained.
It not like all of the sudden "kepler" started slowing down due to eveil things done by NVIDIA in the drivers.
(Even if you read the foums that is the notion the uninformed are crying all over forums)

It simply a question about that "Maxwell" doing what NVIDIA has been doing since the G80...evolve their architechture over previoues version.
.

Yes, yes, Maxwell is a better architecture, but how the fuck is a 780 Ti more or less on the same level as a 290X in Witcher 3 when it previously murdered that card in other games? Does the old-ass 290X also have something that makes it better than Kepler that we don't know about?
 
Yes, yes, Maxwell is a better architecture, but how the fuck is a 780 Ti more or less on the same level as a 290X in Witcher 3 when it previously murdered that card in other games? Does the old-ass 290X also have something that makes it better than Kepler that we don't know about?

Just shows you how much more advanced AMD's technology is, that they can catch and supersede performance tiers with Nvidia cards this late in the product cycle. It is amazing what over-engineering your cards will do in the long run. I suspect that they will get even faster in a few more hours.

Kudos to AMD for putting out such a forward-thinking product.
 
The shader code isn't where the problems are. It's simply nVidia's excessive use of tessellation. Have you seen the Crysis 2 tessellation trickery?

It does nothing... except slow down AMD cards more than nVidia cards.:eek:

LOL

This is hilarious, but bear with me.

All the ocean simulation is run on a small square and then tiled out.
So the mesh you are seeing is not a true indication of how the game actually runs.
After that, the game engine does a z-buffer writeback for something called a "GPU occlusion query".
It's not rendered.
It's occluded ergo it is NOT drawn.
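
For anyone wondering what a GPU occlusion query actually looks like, here is a minimal D3D11 sketch. It is not CryEngine's actual culling code (CryEngine uses its own coverage-buffer system, see the link below), and the two draw helpers are hypothetical stubs. The idea: draw a cheap depth-only proxy, ask how many samples passed the depth test, and skip the expensive tessellated draw when the answer is zero:

```cpp
// Minimal D3D11 occlusion-query sketch; the two Draw* helpers are hypothetical stubs.
#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Stand-ins for the engine's own draw calls (left empty here).
static void DrawBoundingBoxDepthOnly(ID3D11DeviceContext*) { /* cheap proxy draw */ }
static void DrawTessellatedOcean(ID3D11DeviceContext*)     { /* expensive draw */ }

void DrawOceanTileIfVisible(ID3D11Device* device, ID3D11DeviceContext* context)
{
    D3D11_QUERY_DESC desc = {};
    desc.Query = D3D11_QUERY_OCCLUSION;

    ComPtr<ID3D11Query> query;
    if (FAILED(device->CreateQuery(&desc, &query)))
        return;

    // Count how many samples of the cheap proxy pass the depth test.
    context->Begin(query.Get());
    DrawBoundingBoxDepthOnly(context);
    context->End(query.Get());

    // Real engines read the result a frame later to avoid a stall; polled here for brevity.
    UINT64 samplesPassed = 0;
    while (context->GetData(query.Get(), &samplesPassed, sizeof(samplesPassed), 0) == S_FALSE) {}

    if (samplesPassed > 0)
        DrawTessellatedOcean(context);   // visible: draw the expensive mesh
    // else: occluded, so it is simply not drawn
}
```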

I would almost call that article dishonest.
Why?
Because they:
- Isolate the mesh
- Render the mesh at maximum tessellation
- Disable occlusion culling
- Disable dynamic tessellation
- Disable adaptive tessellation

And then present it as "This is how the game works!"

But this is not what the game does.
The claim smells like another piece of PR FUD.

It's all documented here:
http://docs.cryengine.com/display/SDKDOC4/Culling+Explained

People should read that...and if they do not understand it...they should stop posting about the topic.
Simple as that.
If you post about stuff you don't know anything about, in a negative manner...you become a FUD'ster.

(Besides, I doubt NVIDIA had anything to do with that "ocean"...it seems more like something coded by the devs themselves, to be frank.)

So let's see if I can tally the list so far:

"GameWorks is a black box and AMD can do nothing!!!" = FUD and false PR
"Planned obsolescence for "Kepler"" = FUD and false PR
"Crysis 2 is using too much tessellation because of evil NVIDIA" = FUD and false PR

Besides AMD PR, you know what else is a common factor in all 3?

Ignorance about the topic.
But that doesn't stop people from screaming all over forums and, IMHO, making themselves look like uninformed, rabid PR FUD'sters.

But it is quite funny that the arguments used to declare NVIDIA "evil" all seem to come from AMD PR...and get spread around forums like gospel.

I think it would be a good idea to have a "Technical GPU" sub-forum, where people with technical insight could debate these things, without all the false claims.

IT is complicated, I know; I have been working in IT for 20+ years (coding, hardware, networking), and there are a LOT of topics I will not enter, because I have insufficient knowledge about them.
But I will read with joy, ask questions and learn.

I will not run around like a braindead parrot, posting crap I don't understand...but I guess people have different standards...
 
Just shows you how much more advanced AMD's technology is, that they can catch and supersede performance tiers with Nvidia cards this late in the product cycle. It is amazing what over-engineering your cards will do in the long run. I suspect that they will get even faster in a few more hours.

Kudos to AMD for putting out such a forward-thinking product.

AMD has been stuck on the same GCN for the last two years, so they had no choice BUT to keep optimizing while NVIDIA moved on to Maxwell. When AMD's next top-to-bottom architecture finally shows up next year, we'll see how much they keep optimizing the 290X/390X.
 
Yes, yes, Maxwell is a better architecture, but how the fuck is a 780 Ti more or less on the same level as a 290X in Witcher 3 when it previously murdered that card in other games? Does the old-ass 290X also have something that makes it better than Kepler that we don't know about?

I haven't had time to look into what TW3 does in the pipeline (and I doubt I will get it...I can get away with these "light" posts at work, but if I started up a game my boss would be kinda inclined to ask "WTF are you doing?" :D )

But is there any reason the 290X and the 780 Ti shouldn't be neck and neck?
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/4

I smell another false claim ;)

My bet is that TW3 does MORE than previous games, thus exposing the architectural differences between "Kepler" and "Maxwell".
 
Just shows you how much more advanced AMD's technology is, that they can catch and supersede performance tiers with Nvidia cards this late in the product cycle. It is amazing what over-engineering your cards will do in the long run. I suspect that they will get even faster in a few more hours.

Kudos to AMD for putting out such a forward-thinking product.

Thanks for proving my point about certain behaviours on the forum ;)
 
I haven't had time to look into what TW3 does in the pipeline (and I doubt I will get it...I can get away with these "light" posts at work, but if I started up a game my boss would be kinda inclined to ask "WTF are you doing?" :D )

But is there any reason the 290X and the 780 Ti shouldn't be neck and neck?
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/4

I smell another false claim ;)

My bet is that TW3 does MORE than previous games, thus exposing the architectural differences between "Kepler" and "Maxwell".

For some reason they didn't test the 780 Ti. Probably because it would look bad.
[image: 2560_1440.gif]

Keep in mind that the 290 used to run between the 780 and the Titan. In TW3 it's 28% faster than the 780 and 14% faster than the Titan. This is also a game that is poorly optimized for AMD. It's not the only game, and these are not false claims.
 
For some reason they didn't test the 780 Ti. Probably because it would look bad.
[image: 2560_1440.gif]

Keep in mind that the 290 used to run between the 780 and the Titan. In TW3 it's 28% faster than the 780 and 14% faster than the Titan. This is also a game that is poorly optimized for AMD. It's not the only game, and these are not false claims.

What part of post #263 did you not understand?
 
You try to win a debate with "Shame on you, I don't have PowerPoint, thus I am right"?! :eek:

You didn't counter a single point...this is what I have been talking about for a long time...FUD vs facts.



He is not interested in facts, it seems...just parroting PR and cherry-picking to suit his confirmation bias...facts be damned!!!

Do you have anything at all? You haven't proven a thing nor given a single piece of evidence contrary to our point. I have linked proof, as it pertains to my point, from independent sources that for the most part gave impartial information on the subject at hand.

So either post something in support of your point or shut up. Easy as that.
 
What part of post #263 did you not understand?

You agree when it works in your favor, but the fact is that since Maxwell's release AMD has only had a few driver updates, so I am less inclined to believe that it is GCN improving by leaps and bounds.
 
Do you have anything at all? You haven't proven a thing nor given a single piece of evidence contrary to our point. I have linked proof, as it pertains to my point, from independent sources that for the most part gave impartial information on the subject at hand.

So either post something in support of your point or shut up. Easy as that.

Look at the thread...I have posted more factual information than you.
So why don't you start documenting things...or take your own advice.

Do you even compile? :p
 
You agree when it works in your favor, but the fact is that since Maxwell's release AMD has only had a few driver updates, so I am less inclined to believe that it is GCN improving by leaps and bounds.

GCN is not improving by leaps and bounds...looking at older games, the performance delta is the SAME.
Looking at newer games (which do more than before), you start to see the architectural difference between "Kepler" and "Maxwell".

It's as simple as that.
If your notion held any water, GK110 would be losing performance in older titles too.
Since this does NOT happen...you do the math.
 
Look at the thread...I have posted more factual information than you.
So why don't you start documenting things...or take your own advice.

Do you even compile? :p

GCN is not improving by leaps and bounds...looking at older games, the performance delta is the SAME.
Looking at newer games (which do more than before), you start to see the architectural difference between "Kepler" and "Maxwell".

It's as simple as that.
If your notion held any water, GK110 would be losing performance in older titles too.
Since this does NOT happen...you do the math.

Sorry, but that is a failing argument. You have yet to prove AMD has the same access to optimize as NVIDIA, which is what you are alluding to. And before you say you are not: my argument ALL ALONG has been that AMD cannot completely optimize to 100% because there are some unknowns in the pipeline, which is exactly what the article I linked, with its impartial sources, affirmed. They CAN optimize, but likely not with the ease of NVIDIA, or in as timely a manner, or with any certainty about how close to 100% they got compared to what they could do with the same access.

And NO ONE has said they are intentionally back-pedaling the performance through drivers; the claim, and I have to agree, is that they are simply not increasing the performance through drivers at all. So no, their performance in older games has not declined, but no one other than you has made that assertion. If you aren't a paid Nvidia shill, they should pay you, because that is what you do.

And as a bit of a side note, for the love of God, learn to proofread your posts. So many wrong words and misspellings. If you wish to have adequate debates and be taken seriously, it helps when it looks like you took the time to make sure your argument is fluent and comprehensible.
 
Thanks for proving my point about certain behaviours on the forum ;)

I was just reinforcing your post about Kepler and Maxwell. Kepler just can't hang with Maxwell and certainly not the Big Red Machine.
 
lol, what, you don't have a PPT plugin for your browser? Sorry, not my fault.

I agreed with what he said, lol; read the first sentence in my response.

Sorry, I probably shouldn't have lumped you in with the other moron. I feel you are trying to be reasonable, at least.

But in any case the argument ends here because when you stated this:

Granted, it would be easier to make the changes with the source code, but when they don't have it, they can still do many things.


You were in fact agreeing with my whole point. I am not arguing in black and white, as some less-than-reputable posters in here (on both sides) are. I am arguing the grey, where the facts of the case are. You can read my post above this to see that as well.
 
Just shows you how much more advanced AMD's technology is, that they can catch and supersede performance tiers with Nvidia cards this late in the product cycle. It is amazing what over-engineering your cards will do in the long run. I suspect that they will get even faster in a few more hours.

Kudos to AMD for putting out such a forward-thinking product.

It really boggles the mind that some are trying to defend Nvidia in this. It's either A) they slowed down their earlier cards to make their newer ones look better, or B) their Kepler architecture wasn't as good as they advertised and as we thought compared to Hawaii. Both make Nvidia look bad, and both have implications for their future cards. What if the same happens with the 980 after Pascal is released?

But unfortunately, thanks to the army of fanboys, this will be forgotten soon.
 
It really boggles the mind that some are trying to defend Nvidia in this. It's either A) they slowed down their earlier cards to make their newer ones look better, or B) their Kepler architecture wasn't as good as they advertised and as we thought compared to Hawaii. Both make Nvidia look bad, and both have implications for their future cards. What if the same happens with the 980 after Pascal is released?

But unfortunately, thanks to the army of fanboys, this will be forgotten soon.

Exactly. +1000.
 
Sorry, but that is a failing argument. You have yet to prove AMD has the same access to optimize as NVIDIA, which is what you are alluding to. And before you say you are not: my argument ALL ALONG has been that AMD cannot completely optimize to 100% because there are some unknowns in the pipeline, which is exactly what the article I linked, with its impartial sources, affirmed. They CAN optimize, but likely not with the ease of NVIDIA, or in as timely a manner, or with any certainty about how close to 100% they got compared to what they could do with the same access.

And NO ONE has said they are intentionally back-pedaling the performance through drivers; the claim, and I have to agree, is that they are simply not increasing the performance through drivers at all. So no, their performance in older games has not declined, but no one other than you has made that assertion. If you aren't a paid Nvidia shill, they should pay you, because that is what you do.

And as a bit of a side note, for the love of God, learn to proofread your posts. So many wrong words and misspellings. If you wish to have adequate debates and be taken seriously, it helps when it looks like you took the time to make sure your argument is fluent and comprehensible.

I can explain it to you, but I cannot understand it for you.

But I like how the tune is no longer "GameWorks is a black box, AMD can do nothing to optimize!"
Now it has transformed into "GameWorks is a black box, AMD cannot do 100% the same as NVIDIA!"

I wonder how much "thinner" we can make this "argument"...since it's based on PR FUD, not facts ;)

Why don't you start addressing the TECHNICAL aspects of my post, instead of going all personal...starting a fight and trying to get a mod involved...your post stinks of an attempt at derailing, because it lacks any technical response and is all emotional.

So from now on I will only respond to the technical parts of your posts and ignore:
- The fallacy of shifting the burden of proof (AMD started the whining...but they haven't documented anything.)
- Ad hominem (I wish NVIDIA would pay me to game, but alas, I am forced to work like most other people.)
- Red herrings.

That leaves me the following to respond to, after cutting your post down:

" "

Have a nice day :)
 