AMD Briefing With Eric Demers Slide Deck @ [H]

This thread isn't about any "doom" for Nvidia; it's about slides posted by ATI stating that their architecture is badass (and it is), and how Nvidia looks to be moving away from gamers and more towards general-purpose computing.

Again, people posting about "doom" for Nvidia are not reading these posts, unless you are talking about their chipset business.

The funny thing is that it's totally predictable that AMD might release something like these slides. A company doesn't have to move away from gaming to move towards other functions. I don't understand why people assume they're mutually exclusive. Huge fallacy there.
 
On an interesting note, if ATI is going to be making the graphics for the next iteration of the Xbox, what stops them from pushing DX11? Console gaming may dominate sales, but when ATI is making the console's GPU, why would they not push for DX11 on the next iteration? Obviously, as long as current consoles are running on a hybrid DX9, nothing is going to change, but if the new consoles are DX11 compliant, this would once again bridge the gap between PCs and consoles.

It is to be expected that the Xbox 720 / PS4 will have GPUs that are at least DX11 compliant, since they are supposed to be released in 2012 or later; possibly Nintendo's as well. I imagine they should have more than enough power to render all games in native 1080p with high AA this time around.
 
I was referring to the "drinking the Kool-Aid" remark, which was a popular phrase used by Republicans in the past year or two,

i.e. "drinking the Obama Kool-Aid".
 
Sorry Snow, but I have to tell you that Trinibwoy is one of the most informed people on technical forums right now. I've seen his posts here, on Beyond3d, and XtremeSystems. I've never seen him make a post where he was incorrect.

It seems the argument is about the precise definition of something I don't care about. Being a non-technical person, just an enthusiast, I tend to believe the people who present that thingy we call proof, not the ones just typing "LOL!!!!", as I said several times.

And it still seems to me that ATI's GPU has more performance left to extract as the compiler is improved. If the GPU depends on software to schedule its instructions more efficiently, can that be done in the driver?
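Rough back-of-the-envelope to illustrate why that would matter (the numbers are completely made up, just for the sake of the example):

Code:
  # Hypothetical numbers only: how much a better-packing shader compiler in the
  # driver could lift utilization of a 5-wide VLIW unit.
  slots_per_unit = 5             # each VLIW unit can issue up to 5 ops per clock
  avg_filled_today = 3.5         # assumed average slots the compiler fills now
  avg_filled_after = 4.2         # assumed average after a smarter driver compiler
  print("utilization now:   %.0f%%" % (100 * avg_filled_today / slots_per_unit))
  print("utilization after: %.0f%%" % (100 * avg_filled_after / slots_per_unit))

So purely from repacking the same shader code, a driver update could (in this made-up scenario) move utilization from 70% to 84% without touching the hardware.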

Maybe someone who supports the so-called Nvidia side of things can actually post something informative instead of just "LOL!@!!!!!" Also, having so many douchebags like Atech around is really making me want to toss my 8800 GTX and upgrade to a 5870 right now rather than wait until Christmas. Is there some reason Atech refuses to answer the relevant question of whether he works for Nvidia or not?
 
Is there some reason Atech refuses to answer the relevant question of whether he works for Nvidia or not?

I didn't see the question, if you asked it, as I have you on ignore... nice fallacy :)
I have no affiliation with
NVIDIA
AMD
INTEL
VIA
BIGFOOTNETWORKS
CREATIVE
*insert your company of choice*

The only IT company I have an affiliation with is Fullrate A/S (www.fullrate.dk), the ISP for whom I work... keep 'em coming... always funny to watch people grasp for straws ;)

Next week:
Snowden looks into Bigfoot... fake, or long-lost relative?
 
Really?! 30% is kick ass? Sure, it's kick ass against ATI's own last-gen stuff, but against Nvidia's, I'd hardly call it kick ass. As far as ATI goes, the saying is that those who live in glass houses shouldn't throw rocks. ATI will be in a world of hurt in terms of gaming performance in 60 days, and those slides will look very stupid come then, as will Charlie D.
30 percent is the lowest case; increase the resolution and watch the 5870 kick some ass lol
 
30 percent is the lowest case; increase the resolution and watch the 5870 kick some ass lol

30% is the across-the-board average win percentage against the GTX 285. 70% is the across-the-board average by which it beats the 4870, and 60% against the 4890. Again, 30% is NOT kick ass, but it is damn good.
 
30% is the across-the-board average win percentage against the GTX 285. 70% is the across-the-board average by which it beats the 4870, and 60% against the 4890. Again, 30% is NOT kick ass, but it is damn good.
Ok I'll bite.
What minimum percentage should Fermi score across the board, then, to be kick ass in your view? (Compared to the GTX 285.)
 
I have no affiliation with NVIDIA, AMD, INTEL, VIA, BIGFOOTNETWORKS, CREATIVE, *insert your company of choice*.

The only IT company I have an affiliation with is Fullrate A/S (www.fullrate.dk), the ISP for whom I work... [snip]
I would've guessed Killer NIC only cuz it seems so prominently displayed in your sig. I'm jealous.


who would fall for that marketing gimmick lol!?
 
Sorry Snow, but I have to tell you that Trinibwoy is one of the most informed people on technical forums right now. I've seen his posts here, on Beyond3d, and XtremeSystems. I've never seen him make a post where he was incorrect.
I'm not able to say who is right or wrong in the discussion of superscalar vs. VLIW (I'm nowhere near technically proficient enough to do so), but I will say this with regard to using "I've never seen him make an incorrect statement" as proof: "Absence of evidence is not evidence of absence." ;)

Oh, and there was one semi-incorrect statement previously made by him: "A white paper is a marketing document fyi, not a technical specification."

A "white paper" was used in the scientific and government fields long before it started to get associated with "marketing documents" in the tech world. The term "white paper" was (and still is at times) used interchangeably with "research paper", because the original idea behind both was that they provided data and research to solve an issue.

In fact, when I went to a NASA Space Grant symposium in college, they actually billed it at times as students presenting research results and white papers for academic year grant research.

What ended up happening is that the marketing departments of corporations started creating documents that would present a problem and then present "data" on how their product could solve that problem. They started calling them white papers (while working for an IT company over the past few years, I even saw a few come through claiming to be "research papers" when they were in fact just marketing documents), and thus the term is now somewhat improperly used.

However, it's inaccurate to label a white paper as simply a marketing document or sales ad. That's one of its definitions now (unfortunately), but not the only or primary one.
 
Compared to the GTX 285, a 70-80% average; compared to the 5870, 40-50%.


That's a bit steep. We are talking about competing cards from the same generation if Nvidia releases in the next few months, not next gen vs. last gen. Everyone has their own idea of kick ass, I guess.
 
It had way more statistical accuracy than, e.g., those Gallup polls you see at elections... sorry to burst your bubble, but it isn't the people at [H] who make AMD's or NVIDIA's bread and butter... that is done by the average Joe... and there is a good chance he is sporting a G80.

You need to take a trip to Oz, my friend; you smell horribly of straw :D


Doh, you don't know what that means do you? Aww shit! Well, I guess the rest of the forum will have to enjoy without you.


P.S. All you fanboys are idiots! That's right, I'm talking to you! ;)
 
30% is the across-the-board average win percentage against the GTX 285. 70% is the across-the-board average by which it beats the 4870, and 60% against the 4890. Again, 30% is NOT kick ass, but it is damn good.
Hmmm, maybe I misunderstood. Are you saying it won only 30 percent of benchmarks against the GTX 285?
 
A couple of things first:
1) I use an EVGA GeForce GTX 280, but I also own an ATI HD4890 and an ATI HD4870, and many other cards. I'm not a fanboy of any company; I buy hardware that suits my needs and my budget. :D

2) I've read many of the fanboy posts here, and the truth is that fanboys are people with way too much time on their hands. Most people are busy with work, and that's why most people who buy gaming hardware buy whatever they consider fits their needs to enjoy recreational computer gaming. That being said, I'd love to see less fanboy talk. NVidia, AMD/ATI, Intel, and others don't really care about you or me; all they care about is their business, so I pity those who defend one or the other.

The Mud Slinging and Bad Marketing:
I've seen the slides in the presentation, and I don't like the fact that AMD is slinging mud and pointing out what they believe their competition does wrong. It's getting to the point where it's stupid, and every company that behaves in such a manner shows fear and tells the public something like this: "Our product is almost as good as the competition's..."; or something like this: "We know that our products are slightly inferior, but in order to be able to sell our stuff, we have to sling mud at the competition..."; or something like this: "We feel threatened by the competition, and that's why you should support us...". There is a long history of this happening in the computer industry, and usually the companies that had an inferior product, or no product at all to compete with, have done things like this. Some of the more bizarre attacks I've seen:
  • Apple attacking Intel (back in the P4 days) - we all know that the G4/G5 were horrible desktop CPUs
  • AMD attacking Intel back in the K5/K6 days, and then in the Athlon days... heck, they've even done it recently with the whole "True Quad Core" crap
  • ATI/AMD slinging mud at NVidia recently (like in this presentation), even if they did it subtly
  • NVidia slinging mud at Intel, most of the time in a very unprofessional manner

This mud-slinging strategy sucks, and it bothers the hell out of me, because I sell computer systems for a living, and I have to deal directly with most of the disinformation created by all the FUD, mud slinging and bad marketing. I mean, I had customers wanting to upgrade from a GeForce 9800 GTX to a GeForce GTS 250... yet they are essentially the same card. I know that the folks who read and post on these forums are well educated when it comes to hardware, but Joe Sixpack isn't.

The Future:
I don't have a crystal ball to see the future, so what I'm saying is based on simple logic. NVidia is nowhere near doomed, but it's slowly losing ground. They've done stupid things because they're trying to protect their income. They've been hanging on to SLI so hard that Intel basically denied them another license to build chipsets for the Nehalem architecture (Core i7), and they had to enable SLI on X58 and P55 boards instead. Had they not been so stubborn in the past, it might not have come to this. That being said, I never liked NVidia chipsets, with the nForce 680i being notoriously horrible. Then PhysX: another dying standard that didn't even get a chance to become a standard. NVidia claims that it's an open standard, yet the only people it's open to are those willing to write games or applications that use PhysX. Furthermore, NVidia disabled PhysX support in their drivers if you have a competitor's (ATI) card installed in your system. So folks who have, let's say, upgraded from a GeForce 8800 GTX to an ATI Radeon HD5870 won't be able to keep their old card as a PhysX accelerator. I'd say that was a bad decision, not to mention a bad PR move. Also, as a side note, NVidia's multi-monitor support sucks.
So PhysX will be a short-lived thing for NVidia. So what's left for NVidia? Well, a little bit of the embedded market, the discrete graphics card market (gaming), and Tesla (GPGPU). The chipset market is no more for NVidia (there is very little left, and once the Core 2 generation is gone, so is what's left of NVidia's chipset business).

The mud that NVidia is slinging at Intel about Larrabee is just that: mud. The truth is that NVidia is terrified, because from what I read, Larrabee will be one very interesting GPGPU. Not even AMD has anything to compete with it. Larrabee will be x86 based, and most likely, when it comes out next year, it will also put an end to this "because of a newer version of DirectX we have to upgrade the video card" crap. What will be even more interesting is that Larrabee will also be very scalable, and it could be used in small devices and small laptops as a "platform on a chip" sort of thing. Now how could NVidia compete with that? They can't. Hence "Fermi" was born. They're trying to move in that direction, but even if they do, they don't have an x86 license, and I doubt that Intel will sell them one. Yet NVidia isn't stupid or short-sighted. They are thinking about the long run, and their biggest concern right now is to convince enough companies and people to write software that uses their architecture (CUDA). They know that their days are numbered as just a discrete GPU manufacturer. Graphics cards are becoming a commodity, as they should, and IMHO everyone these days should be able to purchase a decent graphics card for around $100. Now, if NVidia had their way, like in the G80 days, we would still have $800 cards, but those days are gone, and AMD/ATI pretty much made sure of that. So NVidia is not doomed (yet), but they know that if they don't do something about it, they're not going to be around ten years from now, and five years from now they will be an insignificant player.

So what's happening with ATI/AMD? Well, maybe I'm not that up to date on them, but I haven't seen anything from AMD that looks even close to Larrabee, except for the claim that sometime in 2011 they will bring "Fusion" to the market. So despite the fact that AMD is both a CPU and a GPU maker, they can't seem to get their act together. It remains to be seen, but despite all the financial trouble that AMD is facing, they will be around for a very long time. The only question is, what's their GPU architecture going to look like after "Evergreen" (the HD 5000 series)? They can't go on forever just adding more shaders and shrinking the die; they have to make major improvements at some point.

Intel abandoned discrete graphics in the late '90s. Anyone remember the Intel i740 cards? I do, because I had one with 8MB of RAM. It was crap, but I was able to play Quake 2 with it. So they have zilch experience when it comes to gaming graphics cards. Yet Larrabee is just around the corner, and I think that it will be one of their biggest money makers. It doesn't have to be the best, just good enough and cheap. If history has taught us something, it's this: if a product is on average just good enough, yet affordable by the masses, then it will sell like crazy and dominate its market segment, or even overtake other segments. So imagine small, low-cost desktops running a Larrabee GPGPU at their core that performs all the functions of a CPU and GPU, and is fast too. Basically all that Larrabee will be is an x86 CPU with a bunch of simple cores (80 or more) that will do all the 3D rendering in software, with just a handful of dedicated 3D functions. This is what NVidia really fears. Maybe AMD has a plan for how to bring Fusion to the market and they're only limited by their financial resources, because they have the technology and the engineering talent, and they also work with GlobalFoundries, so they have the production capacity. But NVidia doesn't have an x86 license, and no fabs. That is why "Fermi" exists and that is why NVidia came up with this architecture.

Performance-wise, I believe that the high-end single-GPU "Fermi" card will be about twice as fast as the current GTX 285 1GB (at least in theory), and maybe 70-80% faster in practice. Nvidia's focus will be to make it slightly faster than the Radeon HD 5870. The problem with "Fermi" will be that, since it has about 3 billion transistors, it will be a bitch for NVidia to make a dual-GPU card with it (heat, power consumption and all that good stuff).

So all I have to say to you guys is this: those of you who recently purchased an HD 5870, enjoy it, because it's a good card; those of you who upgraded from a GTX 285 to an HD 5850, well, I'm sorry, but you've wasted your time and money because it's not that much better; and those of you who are curious about what NVidia will release should hold off. And don't worry, NVidia will bring out new cards. I, for one, am not as concerned about DirectX 11 as I am about better performance. Anyway, I have to go to work now, so take care, and thanks for reading my post.
 
Hmmm, maybe I misunderstood. Are you saying it won only 30 percent of benchmarks against the GTX 285?


No, what I'm saying is that if you total up all the percentages by which the 5870 beats the GTX 285 and the 4870, and then divide by the number of results, the average win against the GTX 285 is 30% and 70% against the 4870. Which means that, on average, the 5870 is 30% faster than a GTX 285 and 70% faster than a 4870.
 
The mud that NVidia is slinging at Intel about Larrabee is just that: mud. The truth is that NVidia is terrified, because from what I read, Larrabee will be one very interesting GPGPU. [snip]

Interesting thoughts. Still pretty skeptical about the Larrabee project, though. We'll see...
 
Larrabee will be x86 based, and most likely, when it comes out next year, it will also put an end to this "because of a newer version of DirectX we have to upgrade the video card" crap.

Larrabee's scalar core is x86; graphics will run primarily on the new vector hardware. It won't be our savior either, as the first iteration still has fixed-function units, i.e. obsolescence will still be the name of the game. However, in a few years everybody, not just Intel, will be able to support new APIs via software updates.

What will be even more interesting is that Larrabee will also be very scalable, and it could be used in small devices and small laptops as a "platform on a chip" sort of thing.

What makes Larrabee any more scalable than current graphics architectures? GT2xx ranges from 1 to 30 cores, RV7xx 1 to 10.

I don't see how anyone can claim that the graphics giants are reacting to Larrabee. They aren't new to this game and had their own visions for the future long before the name Larrabee was ever whispered. G80 set the stage for Fermi and G80 development started in 2002. So can Nvidia predict Intel's future or is it that they actually know what they're doing?
 
Sorry Snow, but I have to tell you that Trinibwoy is one of the most informed people on technical forums right now. I've seen his posts here, on Beyond3d, and XtremeSystems. I've never seen him make a post where he was incorrect.


+1 totally agree.
 
I've been reading the press releases from both ATI and Nvidia.
Let me first state that I have two computers: one has an older Nvidia 8800 GTS, the newer an ATI 4870X2.
I have almost always had Nvidia, but I go where the market offers me the best price/performance.
I'm no one's fanboy.
Like most people here, I am a power user in the sense that I like to purchase the best "power" option, component-wise, at the time of building a rig.

But what you have to realise is that we are the minority.
I work for an electrical wholesaler. Why's that relevant, you may ask?
Because I get my tech "fix" for free from one of our sister companies (gotta love "corporate synergy").
So I don't care much about price, and others here will pay a premium for power because they want it.
Most people don't.
They don't want to pay the price that goes with cutting edge, or close to it.
They want middle-of-the-road cheap.
That is where the money is, that's where the units that ship are, that's where the higher profit is: what we call the "sweet spot" in a brand or product lineup.
Yesterday's top performance at low prices, based on revamped, proven designs or cut-down, economical versions of the newer GPUs.
ATI has always been in the business of focusing on that market, and today is no exception.
They have had their Canadian fingers in this pie since before Nvidia existed; they know the landscape well.
Look at their lineup: they are not resting on the HD 4770, they are going straight to the HD 5770 & HD 5750.
They are CRANKING the cycle faster than Nvidia can handle. Take a look at the HD 5770, the damn thing's nearly as fast as an HD 4890!!
We're talking about a 120-pound card here!!! One that they could drop sub-100 if they wanted to.
IT'S JUST THAT THEY DON'T HAVE TO.
Now I know they haven't got Nvidia in the same place Nvidia had 3dfx (Voodoo) all those years ago, but it's close.
A 9800 GTX cannot compete with this level of performance, and Nvidia has nothing else to touch it.
Don't even get started on the DX11 factor.
If I were a vendor placing my "big" seasonal orders for this important segment of the market, I know where I'd place them.
And it wouldn't be with Nvidia.

So the upshot of all this press-release shenanigans?
Nvidia is coming out with whatever it has, even if it's "just on paper", and ATI is trying to turn the screws on them. Why?
Because they can smell blood in the water and are making a bid to gain some of Nvidia's market share.
Let's just say the timing (the Christmas/seasonal market) has never been more perfect for ATI.
I hope Nvidia can weather the storm; I think they can take the hard steps, even if they have to slash prices to stay in the game.
Good for us, costly for Nvidia: less money to reinvest in the next cycle.
If anyone comes out a winner besides the consumer it will be ATI.
 
I don't see how anyone can claim that the graphics giants are reacting to Larrabee. They aren't new to this game and had their own visions for the future long before the name Larrabee was ever whispered. G80 set the stage for Fermi and G80 development started in 2002. So can Nvidia predict Intel's future or is it that they actually know what they're doing?
I remember reading somewhere that NVidia started development on their G80 architecture circa 2002. Which also makes me wonder why they had that gigantic flop called the FX series (NV30, I believe). Anyway, I believe that you're right about what you've said about Larrabee; however, keep in mind that Intel won't be aiming for the high-end market in the beginning. If they aren't able to make Larrabee competitive with the high-end GPUs at the time of its release, then they will settle for the low-end and mid-range markets. R&D costs for GPUs are through the roof, and Intel knows this, otherwise they wouldn't have abandoned that market in the late '90s (remember the i740 GPU). What Intel wants to compete against this time around is not the high-end GPUs, but the GPGPUs coming out of NVidia and ATI. I mean, common sense tells me that since Larrabee will be an x86-based GPGPU, Intel will be able to market it as a platform on a chip. It all comes down to performance, and how fast they'll be able to make it. If it performs, then we will see Larrabee in a lot of high-end applications, including discrete gaming GPUs, and if it doesn't, then we'll see Larrabee in a bunch of low-priced laptops and low-end desktops. Only time will tell; like I've said before, I don't have a crystal ball to see the future. And you know what they say about those who live by the crystal ball: they eat glass... One thing is for sure, though: NVidia is concerned about Larrabee and their own future, way more than AMD/ATI is, because NVidia can't touch x86, and that's why they won't be able to make anything close to Larrabee, or AMD's Fusion. Again, we'll see, and for now, we'll have to make do with what we have and what's available, and enjoy the hardware that we have.

Here is an article about Larrabee from August 2008: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3367
 
No, what I'm saying is that if you total up all the percentages by which the 5870 beats the GTX 285 and the 4870, and then divide by the number of results, the average win against the GTX 285 is 30% and 70% against the 4870. Which means that, on average, the 5870 is 30% faster than a GTX 285 and 70% faster than a 4870.

You are doing some sort of drugs, because you are pulling shit right out of your ass. The 5850 is 10%-20% faster than the 285 in everything except for a couple of games. Every credible benchmark shows this. Where are you getting your information? Because it is made-up fanboy bullshit.

You may actually have something there though. If I make up facts with no evidence to support them and massive quantities of evidence to disprove them, I can force myself to believe the garbage and be an extremely happy person!

Hey everyone, I believe that the moon is simply the backside of the sun. I know this because there are 15 studies that prove my theory. These studies are very accurate and all come from extremely credible sources, such as various universities. You guys are idiots if you want to ignore all this evidence I've laid on the table for you to see.

See, I feel better already!
 
You are doing some sort of drugs, because you are pulling shit right out of your ass. The 5850 is 10%-20% faster than the 285 in everything except for a couple of games. Every credible benchmark shows this. Where are you getting your information? Because it is made-up fanboy bullshit.

You may actually have something there though. If I make up facts with no evidence to support them and massive quantities of evidence to disprove them, I can force myself to believe the garbage and be an extremely happy person!

Hey everyone, I believe that the moon is simply the backside of the sun. I know this because there are 15 studies that prove my theory. These studies are very accurate and all come from extremely credible sources, such as various universities. You guys are idiots if you want to ignore all this evidence I've laid on the table for you to see.

See, I feel better already!

You're the one on drugs. Go back and look: on every site I've seen, AnandTech, Tom's, FiringSquad, here, the OVERALL AVERAGE (you do know what that is, don't you?) is 30% for the 5870 vs. the GTX 285 and 70% for the 5870 vs. the 4870.
 
What a waste of time this has been. I'm done with all this silly squabbling and these silly people with unnatural attachments to specific corporations.
 
What a waste of time this has been. I'm done with all this silly squabbling and these silly people with unnatural attachments to specific corporations.

Like I've said before, people with way too much time on their hands. The other sick thing is that some of these people will buy a certain product made by a certain corporation just because they have some kind of emotional attachment to the brand. I assume that marketing strategies work best on idiots. And with that being said, I believe that NVidia has some of the most brilliant marketing campaigns for idiots... My favorite is their re-branding strategy during the G80/G92 days, for example 8800GT -> 9800GT, 8800GTS (G92 version) -> 9800GTX and so on... And it worked. ATI is a way better company when it comes to this stuff. Now, I can't wait for the fanboys to bite on my flame bait... Idiots...
 
You're the one on drugs. Go back and look: on every site I've seen, AnandTech, Tom's, FiringSquad, here, the OVERALL AVERAGE (you do know what that is, don't you?) is 30% for the 5870 vs. the GTX 285 and 70% for the 5870 vs. the 4870.

Sadly, he's actually correct about the GTX 285 versus the 5870 thing. Whether he's on drugs or not, who knows. Some people need stuff like Viagra or Ritalin to help them perform better ;) so drugs might not always be a bad thing.

If you take the average of the four games' apples-to-apples average frame rates from this review on HardOCP, it comes to about 30%:

http://hardforum.com/newreply.php?do=newreply&p=1034773843

The percentages are roughly, I believe, 18%, 24%, 34% and 51%, which averages to around 32%. The problem is really Need for Speed: Shift, which is the 18%. If ATI could fix that game, the 5870 would have better than a 32% advantage over the GTX 285. I wish there were more games to average in this review, to balance out NFS: Shift's bad performance issue.
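Just to spell out that math (the per-game numbers are rough, so treat the results as ballpark figures only):

Code:
  # Quick check of the averages quoted above.
  gains = [18, 24, 34, 51]                        # % lead of the 5870 over the GTX 285
  print(sum(gains) / len(gains))                  # 31.75 -> "around 32%"
  without_shift = [g for g in gains if g != 18]   # drop the NFS: Shift outlier
  print(sum(without_shift) / len(without_shift))  # ~36.3% if that game were fixed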
 
Sadly, he's actually correct about the GTX 285 versus the 5870 thing. Whether he's on drugs or not, who knows. Some people need stuff like Viagra or Ritalin to help them perform better ;) so drugs might not always be a bad thing.

If you take the average of the four games' apples-to-apples average frame rates from this review on hardocp, it comes to about 30%:

http://hardforum.com/newreply.php?do=newreply&p=1034773843

The percentages are roughly, I believe, 18%, 24%, 34% and 51%, which averages to around 32%. The problem is really Need for Speed: Shift, which is the 18%. If ATI could fix that game, the 5870 would have better than a 32% advantage over the GTX 285. I wish there were more games to average in this review, to balance out NFS: Shift's bad performance issue.

The reason why ATI will never be able to fix Need for Speed: Shift is that EA decided, in their infinite wisdom, to use NVidia PhysX for the physics simulation in the game. So when the PhysX driver can't find any PhysX-compatible hardware, it uses a wrapper and the PhysX calculations fall back to the CPU. I'm using a Q9550, and together with a Radeon HD4890 1GB, Need for Speed: Shift was unplayable. I mean, it was playable as long as I was racing alone, but as soon as a couple of other cars showed up, everything slowed down to a crawl or hung. As soon as I popped in my GeForce GTX 280, I could play the game perfectly. I guess that everyone who benchmarked the HD5870 did so using a Core i7, probably overclocked. Need for Speed: Shift might have shown worse performance on the HD5870 than on a GTX 260 if anyone who benchmarked the HD5870 had used a Core 2 Quad or Duo. And what's worse is that you can't turn off PhysX in NFS: Shift, because that game doesn't have an alternate physics engine. Way to go, EA...
 
The reason why ATI will never be able to fix Need for Speed: Shift is that EA decided, in their infinite wisdom, to use NVidia PhysX for the physics simulation in the game. So when the PhysX driver can't find any PhysX-compatible hardware, it uses a wrapper and the PhysX calculations fall back to the CPU. I'm using a Q9550, and together with a Radeon HD4890 1GB, Need for Speed: Shift was unplayable. I mean, it was playable as long as I was racing alone, but as soon as a couple of other cars showed up, everything slowed down to a crawl or hung. As soon as I popped in my GeForce GTX 280, I could play the game perfectly. I guess that everyone who benchmarked the HD5870 did so using a Core i7, probably overclocked. Need for Speed: Shift might have shown worse performance on the HD5870 than on a GTX 260 if anyone who benchmarked the HD5870 had used a Core 2 Quad or Duo. And what's worse is that you can't turn off PhysX in NFS: Shift, because that game doesn't have an alternate physics engine. Way to go, EA...

There is only CPU PhysX in "NFS - Shift", not GPU PhysX... stop the BS.

Repeat after me:
There is no GPU PhysX in "NFS - Shift"

kthxbye.
 
There is only CPU PhysX in "NFS - Shift", not GPU PhysX... stop the BS.

Repeat after me:
There is no GPU PhysX in "NFS - Shift"

kthxbye.

I'm not talking bullshit. When I had my ATI HD4890 installed, the Need for Speed: Shift installer installed NVidia's PhysX driver on my PC. Why do you suppose it did that? Because the game uses NVidia's PhysX engine for physics calculations. But since ATI cards don't support PhysX (because PhysX is owned by NVidia), the PhysX processing falls back to the CPU. Hence the poor performance in NFS: Shift on ATI cards. In other games you can turn PhysX completely off, and then it's a fair comparison, but in NFS: Shift you can't. Google "NFS Shift PhysX" and let me know what you find. You'll see that I'm not BS-ing here...
 
I'm not talking bullshit. When I had my ATI HD4890 installed, the Need for Speed: Shift installer installed NVidia's PhysX driver on my PC. Why do you suppose it did that? Because the game uses NVidia's PhysX engine for physics calculations. But since ATI cards don't support PhysX (because PhysX is owned by NVidia), the PhysX processing falls back to the CPU. Hence the poor performance in NFS: Shift on ATI cards. In other games you can turn PhysX completely off, and then it's a fair comparison, but in NFS: Shift you can't. Google "NFS Shift PhysX" and let me know what you find. You'll see that I'm not BS-ing here...


Listen, the PhysX API is capable of both CPU and GPU Physx...if the developers have made the game use GPU PhysX.
NFS - Shift is ONLY using CPU PhysX:
http://www.nzone.com/object/nzone_physxgames_home.html
http://physxinfo.com/

Why don't you google yourself...:rolleyes:
 
Listen, the PhysX API is capable of both CPU and GPU Physx...if the developers have made the game use GPU PhysX.
NFS - Shift is ONLY using CPU PhysX:
http://www.nzone.com/object/nzone_physxgames_home.html
http://physxinfo.com/

Why don't you google yourself...:rolleyes:

I did google myself and I come up in a couple of places because I wrote a couple of reviews and technical papers... anyway... on another note...

If, by your own admission, NFS Shift is using NVidia's PhysX API, and the PhysX API detects a PhysX-capable GPU (or even an old Ageia PhysX card), then why wouldn't it use it? I've tested NFS Shift on an ATI HD4890, on a BFG GTX 260 and on my current GTX 280. On all the NVidia cards it ran considerably better, because it was using hardware PhysX. What's so hard to understand? This is getting redundant... Also, the stuff on NZONE and SLIZONE (which I'm being given as a reference) is hardly ever up to date.

Check out this Electronic Arts forum, just to prove to you that what I'm saying is true: http://forum.ea.com/eaforum/posts/list/306188.page

Here is a quote from the forum, as I'm sure that you'll be able to read the rest directly from there:

I know there are already 6 million posts about the problem with ATI cards, but I wanted to lob in my 2 cents worth. I'm gonna do it backwards for the TLDR crowd.

CONCLUSION. If you do not have an Nvidia card, geforce 8 or higher, the PhysX processing gets shunted onto your processor and maxes it out, effectively bottlenecking your system regardless of resolution or settings. You get a similar effect in Mirror's Edge if you enable PhysX without a geforce card. With car details turned down there is a small performance boost, presumably because there is less damage modelling going on.
 
I did google myself and I come up in a couple of places because I wrote a couple of reviews and technical papers... anyway... on another note...

If, by your own admission, NFS Shift is using NVidia's PhysX API, and the PhysX API detects a PhysX-capable GPU (or even an old Ageia PhysX card), then why wouldn't it use it? I've tested NFS Shift on an ATI HD4890, on a BFG GTX 260 and on my current GTX 280. On all the NVidia cards it ran considerably better, because it was using hardware PhysX. What's so hard to understand? This is getting redundant... Also, the stuff on NZONE and SLIZONE (which I'm being given as a reference) is hardly ever up to date.

Check out this Electronic Arts forum, just to prove to you that what I'm saying is true: http://forum.ea.com/eaforum/posts/list/306188.page

Here is a quote from the forum, as I'm sure that you'll be able to read the rest directly from there:

I don't care about people's misunderstandings.
There is only hardware PhysX (GPU physics) if there are API hooks in the game for it.
Most PhysX games don't have these API hooks.

NFS - Shift is such a game... no GPU physics... even on my rig with a GTX285 card.

NVIDIA doesn't list "NFS - Shift" as a GPU PhysX game, physxinfo lists it as a non-GPU game... Chris Ray says the game only has software PhysX.

Then I really don't care about people's made-up "issues"... the case is clear... it's FUD and ignorance combined :rolleyes:


EDIT:
What does my eye see:
http://www.pcgameshardware.com/aid,...engine-info-and-Full-HD-screenshots/Practice/

Need for Speed: Shift - Performance estimation
Although we wanted to deliver graphics card benchmarks of Need for Speed: Shift, we have to refrain from doing so. In our version of the game Nvidia's Geforce cards were running without problems, but AMD's Radeons were noticeably slower than usual. AMD is already aware of the problem and is working on optimizations for Shift. As soon as we receive a solution for the problems we will deliver benchmark results for graphics cards and processors.

So it's a problem in AMD's GPU drivers...but PhysX gets blamed...point proven!
 
I read something similar. It turns out that some people get a performance boost by using an Nvidia card as a PPU (with the fix) while using an ATI card as the main renderer.
NFS: Shift is not supposed to have GPU-accelerated physics, but it does have PhysX, which means a wrapper, which in turn means you are at the mercy of Nvidia's PhysX drivers. People have updated to newer PhysX drivers and performance has increased.
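To illustrate the dispatch pattern everyone is arguing about (the names below are made up, nothing like the real PhysX SDK), the point is simply that physics middleware only offloads work to the GPU if the game shipped with the GPU code path ("API hooks") and a supported accelerator is present; otherwise everything lands on the CPU:

Code:
  # Rough sketch with hypothetical names: CPU fallback in a physics middleware layer.
  def run_physics_step(scene, game_has_gpu_hooks, gpu_present):
      if game_has_gpu_hooks and gpu_present:
          return simulate_on_gpu(scene)   # hardware-accelerated path
      return simulate_on_cpu(scene)       # software fallback, limited by the CPU

  def simulate_on_cpu(scene):
      # placeholder: step every rigid body on the host CPU
      return [step_body(body) for body in scene]

  def simulate_on_gpu(scene):
      # placeholder: would hand the whole batch to the accelerator
      return [step_body(body) for body in scene]

  def step_body(body):
      return body  # stand-in for the real integration step

If the game never calls the GPU path, the whole "missing hardware PhysX" complaint is moot; if it does, an ATI-only system always takes the CPU branch.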
 
Yes, I'm well aware of that fact. He's on the Beyond3D forums too. Hey, when it comes to technical terms, sometimes companies take liberties with what is generally considered the standard definition of something (even his reply of "This is very much what is achieved on the architecture" seems to speak to that). A more blatant example is Nvidia saying it has 512 "cores". We all know (at least I hope most of you know) that those shouldn't really be considered cores.

To further my point, how's this from ATI's Eric Demers:

Eric: Actually, it's not really superscalar...more like VLIW

Funnily enough, the B3D review of the granddaddy R600 (if you consider RV670 to be the bastard child) says it's superscalar, and I'm not sure how he missed that. Perhaps he was only recently acquainted with how VLIW works, looked it up on Wikipedia, and went "Oh noes, this isn't superscalar.."

For local memory access, the shader core can load/store from a huge register file that takes up more area on the die than the ALUs for the shader core that uses it. Accesses can happen in 'scalar' fashion, one 32-bit word at a time from the application writer's point of view, which along with the capability of co-issuing 5 completely random instructions (we tested using truly terrifying auto-generated shaders) makes ATI's claims of a superscalar architecture perfectly legit.

http://www.beyond3d.com/content/reviews/16/8

So it's a superscalar architecture using a VLIW implementation. :eek:
No wait, perhaps the other way round. :eek:

How about we call it superscalar VLIW and be done with it? :D
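For what it's worth, here's a toy sketch of what "co-issuing 5 independent instructions" boils down to in a VLIW design (my own simplification, nothing like ATI's actual shader compiler): the scheduler, in software, groups operations that don't depend on each other into one wide bundle, whereas a superscalar CPU finds that parallelism in hardware at run time.

Code:
  # Toy example: greedily pack independent ops into VLIW bundles of up to 5 slots.
  # An op can join the current bundle only if none of its inputs are produced
  # inside that same bundle (i.e. the ops are independent of each other).
  def pack_vliw(ops, width=5):
      bundles, current, produced = [], [], set()
      for dest, sources in ops:            # ops are (dest, sources) in program order
          if len(current) == width or any(s in produced for s in sources):
              bundles.append(current)      # close the bundle and start a new one
              current, produced = [], set()
          current.append((dest, sources))
          produced.add(dest)
      if current:
          bundles.append(current)
      return bundles

  # r2, r3 and r4 are independent, so they share a bundle; r5 needs r2, so it
  # has to wait for the next bundle (slots the compiler failed to fill stay empty).
  shader = [("r2", ("r0", "r1")), ("r3", ("r0",)), ("r4", ("r1",)), ("r5", ("r2",))]
  for i, bundle in enumerate(pack_vliw(shader)):
      print("bundle", i, [dest for dest, _ in bundle])

How full those bundles end up being is exactly the compiler/driver question raised earlier in the thread.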
 
Sadly, he's actually correct about the GTX 285 versus the 5870 thing. Whether he's on drugs or not, who knows. Some people need stuff like Viagra or Ritalin to help them perform better ;) so drugs might not always be a bad thing.

If you take the average of the four games' apples-to-apples average frame rates from this review on hardocp, it comes to about 30%:

http://hardforum.com/newreply.php?do=newreply&p=1034773843

The percentages are roughly, I believe, 18%, 24%, 34% and 51%, which averages to around 32%. The problem is really Need for Speed: Shift, which is the 18%. If ATI could fix that game, the 5870 would have better than a 32% advantage over the GTX 285. I wish there were more games to average in this review, to balance out NFS: Shift's bad performance issue.

Not sure, but I think you responded to the wrong poster as you backed up my statement to me.

I'm sure drivers will eventually push the percentage difference between the GTX 285 and the 5870 up to the 40-45% range, but right now it sits around 30%.
 