Fermi is going to change gaming forever

ATI 8500: First with Tessellation
ATI 3xxx series: First with DX 10.1
ATI 5xxx series: First with DX 11, bonus Eyefinity if you want it
Nvidia 8xxx series: First with DX 10, bonus CUDA (later PhysX)
Nvidia 2xx series: First with absolutely nothing
Nvidia Fermi: First with absolutely nothing - finally has tessellation! Finally has DX 10.1! Finally has DX11!

Yet Nvidia is the one pushing boundaries? Guffaws all around! :rolleyes:

Did you read what I wrote? I didn't say ATI wasn't innovative, I said the 58xx wasn't innovative, and it isn't. Being DirectX 11 isn't innovative. Nothing about the card is innovative.

Fermi, on the other hand, pushes boundaries in almost every area of GPU design. You would have to be blind not to see the technological leap that Fermi is over the 58xx.

The 58xx is a nice card, and certainly if you want to play games really fast at a good price, it's a great card. By that metric it is a successful card, much like the Phenom was a good CPU.

However, the point of this thread was the pushing of boundaries, not being efficient, and on that point I do not think there is any doubt that Fermi is going to have a massive impact on GPU design and function going forward, much like it was obvious the Pentium M would become the Core 2.
 
So ATI loyalists, what do you have to fear Fermi for? It's either going to come and beat the 5870/5970, tie it unremarkably, or just downright suck and come with excessive power requirements and a nice feature set but unremarkable performance (think GeForce FX 5xxx series). Either way, a true gamer, a true enthusiast, and a true multi-company supporter will greet Fermi with open arms. All others are loyalists and will reap what they sew.

1. What anyone says on this forum or any other will not affect the capabilities of Fermi one iota = 'All others are loyalists and will reap what they se(o)w' = nonsensical drivel.

2. Some of us aren't into waving magic wands before, during, and after we post, but prefer to assess and argue the situation based on available facts, best guesses derived from those facts, the laws of physics, and logic, letting the 'chips' fall where they may. If that means Fermi (or whatever) comes out smelling like a week-old dead fish on cost/performance, that's simply the hard-fact/best-guess reality extant at that time.
 
Did you read what I wrote? I didn't say ATI wasn't innovative, I said the 58xx wasn't innovative, and it isn't. Being DirectX 11 isn't innovative. Nothing about the card is innovative.

Fermi, on the other hand, pushes boundaries in almost every area of GPU design. You would have to be blind not to see the technological leap that Fermi is over the 58xx.

The 58xx is a nice card, and certainly if you want to play games really fast at a good price, it's a great card. By that metric it is a successful card, much like the Phenom was a good CPU.

However, the point of this thread was the pushing of boundaries, not being efficient, and on that point I do not think there is any doubt that Fermi is going to have a massive impact on GPU design and function going forward, much like it was obvious the Pentium M would become the Core 2.

DX 10.1 years before Nvidia. DX 11 six-plus months before Nvidia, with Eyefinity on a single board.

Fermi is not innovative. It brings nothing but DX 11 to the table, and it's very, very big. Of course, you could think that it has some magical powers from God that let it do its tessellation faster than ATI. However, most people know it's because Fermi is almost 1B transistors bigger than ATI's part.

ATI's next GPU, Northern Islands, comes out later this year and will be on 28nm. Since ATI can cram even more transistors onto 28nm than Nvidia can on 40nm, you can bet it will be a much better-performing part than Fermi.

The other factor you neglect to mention is that all of ATI's 5x00 series cards have the same tessellation unit. Nvidia's is based on software, and as you remove shader units the tessellation will slow down.


But you guys will see that yourselves in a month.
 
Really??? That's your contribution to the thread?


Actually, you can read all my posts; they are all well thought out. But we continue to get those who say a six-month-late part that does less than the six-month-older part from another company is the actual innovative one.


ATI has Eyefinity on a single card. Nvidia doesn't; it requires a very expensive dual-card setup.

ATI has DX 11 from $40 cards all the way up to $600 cards. Currently Nvidia doesn't.

ATI has already shipped more than 2M DX 11 parts, and every day more ship.

ATI has the innovative part, just like they did with DX 10.1. Thankfully Nvidia put out a DX 11 part, because if they didn't, they would do everything they could to kill it, just like they did with DX 10.1.
 
It is not the trees that are expensive so much as the leaves and the shadows they cast that make them look real.

Good example:
[tree.gif]

[tree_bare.jpg]
[tree_leaves.jpg]


These are both the same tree; the leaves are simple flat transparent planes that cast leaf-shaped shadows. The tree is 17,000 polygons, and SpeedTree could probably do a better job cheaper, so it is the leaves that are expensive at some 250,000-odd polygons, but without the shadows cast on it the tree is meh.

The smart thing would have been to bake the cast shadows from the leaves and find ways to combine leaves with simpler objects or normal maps; normal maps are a way of getting more geometry for cheap. So developers want more geometry cheap, since normal maps take up texture memory, but it is all about trade-offs and target audiences.
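To make the baking idea concrete, here is a rough offline sketch in C++ (hypothetical names and data layout of my own, not from any actual engine or from this thread): every trunk lightmap texel casts one shadow ray toward a directional light and is darkened if any flat leaf quad blocks it. A real baker would sample area lights for soft penumbras and use the engine's own UV layout.

```cpp
// Sketch of baking leaf-card shadows into a trunk lightmap.
// Brute force O(texels * quads); fine for an offline step.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// One leaf card: a flat parallelogram (corner plus two edge vectors).
struct LeafQuad { Vec3 origin, edgeU, edgeV; };

// Does the shadow ray from p toward the light hit this quad?
static bool occluded(Vec3 p, Vec3 toLight, const LeafQuad& q) {
    Vec3 n = cross(q.edgeU, q.edgeV);                  // quad plane normal
    float denom = dot(toLight, n);
    if (denom > -1e-6f && denom < 1e-6f) return false; // ray parallel to quad
    float t = dot(sub(q.origin, p), n) / denom;
    if (t <= 0.0f) return false;                       // quad behind the texel
    Vec3 hit   = {p.x + t * toLight.x, p.y + t * toLight.y, p.z + t * toLight.z};
    Vec3 local = sub(hit, q.origin);
    // Express the hit point in (u, v) quad coordinates via a 2x2 solve.
    float a = dot(q.edgeU, q.edgeU), b = dot(q.edgeU, q.edgeV), c = dot(q.edgeV, q.edgeV);
    float det = a * c - b * b;
    if (det == 0.0f) return false;                     // degenerate quad
    float du = dot(local, q.edgeU), dv = dot(local, q.edgeV);
    float u = (du * c - dv * b) / det, v = (dv * a - du * b) / det;
    return u >= 0.0f && u <= 1.0f && v >= 0.0f && v <= 1.0f;
}

// For each trunk texel (world positions precomputed), bake a shadow term.
void bakeTrunkShadows(const std::vector<Vec3>& texelPos, Vec3 toLight,
                      const std::vector<LeafQuad>& leaves,
                      std::vector<std::uint8_t>& lightmap) {
    for (std::size_t i = 0; i < texelPos.size(); ++i) {
        bool shadowed = false;
        for (const LeafQuad& q : leaves)
            if (occluded(texelPos[i], toLight, q)) { shadowed = true; break; }
        lightmap[i] = shadowed ? 64 : 255;             // flat shadow, no penumbra
    }
}
```

Once baked, the shadow term is just a texture multiply at runtime, so those 250,000-odd leaf polygons stop costing anything per frame for the shadowing.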

I do think that if the ray tracing is real, it will impact games as soon as it can be rolled into UE3, as shiny sells games. Good games have good foundations, but when was the last time you saw an ugly game do well?
 
However, the point of this thread was the pushing of boundaries, not being efficient, and on that point I do not think there is any doubt that Fermi is going to have a massive impact on GPU design and function going forward, much like it was obvious the Pentium M would become the Core 2.

1. Fermi is not going to 'change gaming forever' if it gets little traction or runs a distant second to the 5000 series in the GPU market.

2. As Microsoft spokespersons said outright during CES, the Xbox is no more than halfway through its lifecycle. The current cost/performance champion 5000 series, with refreshes, can hold down the fort for the foreseeable future, giving AMD more than ample time to get the hardware and software bugs of their brand-new-architecture 6000 series worked out before release.

3. I suspect the 6000 series/Bulldozer/Fusion is going to be the real game changer; there's no getting around the fact that Nvidia is missing a critical component in the CPU/GPU fusion arena.
 
Different doesn't always mean better. With the development time it takes to come out with a good game, it can be quite a while before we see these "new features" actually put to use in anything other than Nvidia tech demos.

There are some folks (such as yourself presumably) who discount the paper specs and prefer to wait until they see benchmark numbers and can compare fps. There's definitely nothing wrong with that and it makes sense to do so if that's the only thing you're interested in. However, there are others who can appreciate lower level architectural details and the new things that they enable. My post was a response to that hilariously incomplete list of architectural firsts posted earlier.
 
There are some folks (such as yourself presumably) who discount the paper specs and prefer to wait until they see benchmark numbers and can compare fps. There's definitely nothing wrong with that and it makes sense to do so if that's the only thing you're interested in. However, there are others who can appreciate lower level architectural details and the new things that they enable. My post was a response to that hilariously incomplete list of architectural firsts posted earlier.

The problem is that while Fermi brings new things inside the GPU, it hardly does anything for us unless it reduces the price while driving up performance.

As it is, Fermi is a very big chip pushing what's possible on 40nm, and it's paying for it. If it had launched on 32nm I'm sure it would have destroyed the ATI cards; however, that hasn't happened, and it looks like it will be at best on par with what ATI is offering while being much bigger.

There is a site if you want to discuss those details. It's www.beyond3d.com
 
1. Fermi is not going to 'change gaming forever' if it gets little traction or runs a distant second to the 5000 series in the GPU market.
I agree Fermi is very late, and ATI has a top-to-bottom DX 11 lineup, which from what I've been reading Nvidia won't have till the end of the year. So ATI will push many more DX 11 cards this year than Nvidia.


2. As Microsoft spokespersons said outright during CES, the Xbox is no more than halfway through its lifecycle. The current cost/performance champion 5000 series, with refreshes, can hold down the fort for the foreseeable future, giving AMD more than ample time to get the hardware and software bugs of their brand-new-architecture 6000 series worked out before release.
That doesn't matter. The PS1 was replaced 6 years into its life but sold for another 5 years. The PS2 is still selling strongly despite being replaced 7 years into its life. The next Xbox will come. Maybe 2012.

But Fermi wouldn't work in a 2011 Xbox. It's way too big and hot. MS wants a well-rounded chip. It's why they went with the Xenos instead of an Nvidia 7900 chip. On paper the 7900 is a monster compared to the Xenos, but the Xenos is the better chip in the end.

3. I suspect the 6000 series/Bulldozer/Fusion is going to be the real game changer; there's no getting around the fact that Nvidia is missing a critical component in the CPU/GPU fusion arena.

Fusion will change laptop gaming forever. It apparently has a 4650 built into the CPU, which means laptops will have good gaming performance out of the box.

The 6000 series, or Northern Islands, should launch on 28nm, which means they can at least double their transistors and keep the same die size, perhaps even triple. It will be a big boost over both the 5000 series and the Fermi series. Nvidia should keep Fermi around on 32nm or 28nm to compete. So it will be interesting.
 
kllrnohj, your GPU history is sorely lacking. How about first with 32-bit, first with hardware T&L, first with a crossbar memory bus, first with programmable shaders, first with SM3.0, first with unified shaders (on the PC), first with a proper general compute implementation... etc, etc.

No it isn't. Had you bothered to read the post I was quoting, you'd have noted that he was talking about how ATI's new direction means they aren't innovative. Thus, I was pointing out that the real situation is the exact opposite of what he was claiming. Nvidia being innovative 5 years ago doesn't make them innovative now, now does it? Also, I DID mention GPGPU, thank you very much.

Fermi brings nothing new from a graphics standpoint because features are defined by Microsoft - duh. The only thing the IHVs can differentiate on is IQ and performance. Nvidia is tackling the first one with coverage sampling support for alpha textures and faster sparse sampling for soft shadow mapping. Performance is yet to be seen.

Architecturally, however, Fermi brings a lot of firsts to GPUs. Just having a fully coherent read/write memory hierarchy is a big deal for GPUs. There's also the big overhaul in the geometry pipeline. Some people don't really care about that stuff, but for those who do, there are a lot of big firsts in Fermi that shed some light on the future of GPU architectures as they move closer to the flexibility and programmability of CPUs.

Except this thread isn't about any of that. This thread is about how fermi will change *gaming* forever, which you just agreed that it won't.

Did you read what I wrote? I didn't say ATI wasn't innovative, I said the 58xx wasn't innovative, and it isn't. Being DirectX 11 isn't innovative. Nothing about the card is innovative.

Fermi, on the other hand, pushes boundaries in almost every area of GPU design. You would have to be blind not to see the technological leap that Fermi is over the 58xx.

The 58xx is a nice card, and certainly if you want to play games really fast at a good price, it's a great card. By that metric it is a successful card, much like the Phenom was a good CPU.

However, the point of this thread was the pushing of boundaries, not being efficient, and on that point I do not think there is any doubt that Fermi is going to have a massive impact on GPU design and function going forward, much like it was obvious the Pentium M would become the Core 2.

*sigh*

It's clear you refuse to listen to reason, so I'm done responding.
 
So now what happens is a dev programs their DX 9 path for consoles and a DX 11 path for PCs. The DirectX path can dynamically shut off features so it runs properly on DX 10/10.1 cards. So devs have very little extra coding to support all three major graphics IPs for two big markets.

Why would game developers do what you're saying?

It costs more money to develop these effects for the PC; it's easier and cheaper to just develop one version for all the platforms. The problem is that PC sales for most games are going to be dwarfed by console sales, and while this is the case, making improvements to the PC version is generally going to be seen as a waste of money.

On top of that, only a small fraction of the PC audience even has DX11 hardware, and of that tiny audience an even smaller slice is going to be able to run DX11 effects. Of that even smaller slice, only a fraction would see DX11 effects as a "deal maker", that is to say they'd go from being on the fence to actually buying the game because of DX11 support.

The idea of dynamically shutting off features is inherent in all versions of DX: if the hardware doesn't support the intended version, the game uses fallback effects from previous versions. Every version of DX has been like this; DX11 is no different in this regard.
 
Why would game developers do what you're saying?

It costs more money to develop these effects for the PC; it's easier and cheaper to just develop one version for all the platforms. The problem is that PC sales for most games are going to be dwarfed by console sales, and while this is the case, making improvements to the PC version is generally going to be seen as a waste of money.

On top of that, only a small fraction of the PC audience even has DX11 hardware, and of that tiny audience an even smaller slice is going to be able to run DX11 effects. Of that even smaller slice, only a fraction would see DX11 effects as a "deal maker", that is to say they'd go from being on the fence to actually buying the game because of DX11 support.

The idea of dynamically shutting off features is inherent in all versions of DX: if the hardware doesn't support the intended version, the game uses fallback effects from previous versions. Every version of DX has been like this; DX11 is no different in this regard.


One of the biggest problems going into the PS3 and 360 era for Japanese publishers was the lack of knowledge of how GPUs perform. They were all used to the PS2 and its development process.

By getting to know DX 11, which will form the basis of the graphics chips in next-gen consoles, they will get a leg up by already moving their code and projects to a DX 11 code base.

You're also missing an important part of DX 11. MS made it very easy for a program to downgrade based on which hardware level is available: DX 11 exposes DX 10 and 10.1 hardware as feature levels. So when you code a game using compute shaders and tessellation, and the program sees a 10.1-level card, it can adjust accordingly.
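For what it's worth, that downgrade mechanism is a real part of the D3D11 API. A minimal sketch in C++ (error handling omitted; the function and variable names are mine): you list the feature levels you accept, and the runtime hands back the best one the GPU supports, so a single DX11 codebase can branch once at startup.

```cpp
// Minimal sketch: create a D3D11 device that falls back to DX10-class
// feature levels on older hardware (error handling omitted).
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

bool CreateDeviceWithFallback(ID3D11Device** device,
                              ID3D11DeviceContext** context,
                              D3D_FEATURE_LEVEL* got)
{
    // Ask for 11_0 first; the runtime walks down this list instead of
    // failing outright on DX10/10.1-class cards.
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
    };

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        wanted, static_cast<UINT>(sizeof(wanted) / sizeof(wanted[0])),
        D3D11_SDK_VERSION, device, got, context);

    return SUCCEEDED(hr);
}

// Usage: enable the tessellation/compute path only on true DX11 hardware.
// if (CreateDeviceWithFallback(&dev, &ctx, &level) &&
//     level >= D3D_FEATURE_LEVEL_11_0) { /* hull/domain shaders available */ }
```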

I'll try to find it, but I believe Giant Bomb had a bunch of game devs on one of their podcasts talking about upcoming DX 11 projects and how quickly developers will embrace it because of the above and other scenarios.

DX 11 is not DX 10; it's not hampered by what DX 10 was hampered with.

DX 10 cards account for 76% of graphics cards on Steam. However, only 48% of systems are running DX 10 platforms (Vista or Win 7).

http://store.steampowered.com/hwsurvey/videocard/
http://store.steampowered.com/hwsurvey/

DX 11 never has to worry about this because it works on Vista and Win 7, unlike DX 10, which only worked on Vista and not XP. Add the fact that it's very simple to support DX 10 hardware through DX 11, and you have a perfect storm whose full effects you will see later this year.
 
Why would game developers do what you're saying?

It costs more money to develop these effects for the PC; it's easier and cheaper to just develop one version for all the platforms. The problem is that PC sales for most games are going to be dwarfed by console sales, and while this is the case, making improvements to the PC version is generally going to be seen as a waste of money.

On top of that, only a small fraction of the PC audience even has DX11 hardware, and of that tiny audience an even smaller slice is going to be able to run DX11 effects. Of that even smaller slice, only a fraction would see DX11 effects as a "deal maker", that is to say they'd go from being on the fence to actually buying the game because of DX11 support.

The idea of dynamically shutting off features is inherent in all versions of DX: if the hardware doesn't support the intended version, the game uses fallback effects from previous versions. Every version of DX has been like this; DX11 is no different in this regard.

Exactly... and self-proclaimed experts such as pasta4u don't seem to understand that it's the asset creation that is far more prohibitive than a few lines of code to add a fallback. DX programming is very similar regardless of version... there's no magical learning curve for us devs to "practice" at before adding the DX11 effects, as you make it sound.
 
That doesn't matter. The PS1 was replaced 6 years into its life but sold for another 5 years. The PS2 is still selling strongly despite being replaced 7 years into its life. The next Xbox will come. Maybe 2012.

The past is a poor guide to the present on this. Microsoft and Sony both intend to suck down some profits on their hardware for a while, now that they are making a profit on it after years of losing money on each one they sold. NOBODY in the business is in a hurry to change; the publishers LOVE developing games on a well-greased (decreasing-cost) pathway. Natal will open up new markets with Microsoft doing the heavy lifting on the programming side; the developers will be able to incorporate it with minimal additional expense.

EVERYBODY is moving into a maximized-profit situation and wants to milk it as long as possible.

Don't look for the next generation consoles from Microsoft or Sony until at least Xmas 2013.

On the positive side, when the new console generation does appear, they are going to have insane capability at 1080p, which is going to be the dominant resolution for a very long time.
 
I am so disappointed in you all. Nothing but trolls in this thread. Stay on topic. The topic is Fermi. Please, ATI guys, just leave this thread alone. It's obvious you are sucking on their titties and nothing Fermi could dream of doing is acceptable to you. It's such blatant fanboyism. Fermi could come and deliver 150 fps at 32X AA and you would still nitpick it.
 
I am so disappointed in you all. Nothing but trolls in this thread. Stay on topic. The topic is Fermi. Please, ATI guys, just leave this thread alone. It's obvious you are sucking on their titties and nothing Fermi could dream of doing is acceptable to you. It's such blatant fanboyism. Fermi could come and deliver 150 fps at 32X AA and you would still nitpick it.

Sorry AMD fans ruined your circle jerk. Fermi will push boundaries and will change gaming forever forever.. forever... forever... (echoes). GT360 will also own the 5970.

/cosign my friend.
 
I am so disappointed in you all. Nothing but trolls in this thread. Stay on topic. The topic is Fermi. Please, ATI guys, just leave this thread alone. It's obvious you are sucking on their titties and nothing Fermi could dream of doing is acceptable to you. It's such blatant fanboyism. Fermi could come and deliver 150 fps at 32X AA and you would still nitpick it.

This whole thread is a troll topic.


You have never told us why Fermi is going to change gaming. All it is, is a faster GPU. We have gotten faster GPUs every 6-8 months for almost two decades. What do you want us to do, jump for joy that we are getting yet another one?

I've already pointed out that Northern Islands is coming out in the second half of this year and will be on 28nm. It's from ATI, and chances are, since it's not only on a new process node but an optical shrink of 32nm, it will be faster than Fermi will be on 40nm. So will we have yet another game-changing GPU when that comes out?
 
Look, when I posted it I meant to write "IMHO, I think Fermi has the capability to change gaming." I know it looks troll-like, but it wasn't meant that way. I have no idea if it will; none of us do. Maybe a little in some ways. But it should be faster than the 5xxx series if it wants to succeed. And its geometry engine is something that has the potential to change gaming. Even Brent says it has him excited.
 
Look, when I posted it I meant to write "IMHO, I think Fermi has the capability to change gaming." I know it looks troll-like, but it wasn't meant that way. I have no idea if it will; none of us do. Maybe a little in some ways.

But you have yet to tell us why.

What does Fermi do that the ATI parts don't? The only thing it might do is be faster (though it isn't out yet).

So what does it bring to the table?

If you said DX 11 is going to change gaming forever, I'd agree with you.

If you said ATI having a top-to-bottom DX 11 lineup within five months of DX 11 launching is, I'd agree with you. Especially since low-end DX 10 cards didn't come out for a while.
 
" NVIDIA also hasn’t forgotten about image quality and has improved CSAA IQ notably. NVIDIA will now offer a 32X CSAA mode, which uses 8x Color Samples and 24x Coverage Samples. Think of this as 8X MSAA on steroids, it should mean that there won’t be a huge drop in performance using 32X CSAA compared to 8X MSAAand this is certainly something we will test. On paper, 32X CSAA should be actually playable when 8X MSAA is in-game. There have also been improvements to Transparency AA (Alpha to Coverage.) NVIDIA also discussed that there shouldn’t be a large drop in performance using 8X MSAA compared to 4X MSAA as we saw with the GT200 thanks to performance improvements."

Maybe better image quality?


" The GF100 should accelerate geometry faster than any other GPU known to date throughout the rendering pipeline, all the way from Triangle Setup to Geometry Shading to Tessellation to Rasterizing. If NVIDIA’s investment in its geometry engine proves correct, the GF100 could be substantially faster than the AMD Radeon HD 5000 series when it comes to things like DX11 Tessellation; one of the Radeon HD 5000 series main selling points right now. This is all theoretical of course until we actual test the GF100’s performance in games. "

Potential for faster geometry manipulation, thus giving devs more power to give us better-looking models for once?
 
" NVIDIA also hasn’t forgotten about image quality and has improved CSAA IQ notably. NVIDIA will now offer a 32X CSAA mode, which uses 8x Color Samples and 24x Coverage Samples. Think of this as 8X MSAA on steroids, it should mean that there won’t be a huge drop in performance using 32X CSAA compared to 8X MSAAand this is certainly something we will test. On paper, 32X CSAA should be actually playable when 8X MSAA is in-game. There have also been improvements to Transparency AA (Alpha to Coverage.) NVIDIA also discussed that there shouldn’t be a large drop in performance using 8X MSAA compared to 4X MSAA as we saw with the GT200 thanks to performance improvements."

Maybe better image quality?


" The GF100 should accelerate geometry faster than any other GPU known to date throughout the rendering pipeline, all the way from Triangle Setup to Geometry Shading to Tessellation to Rasterizing. If NVIDIA’s investment in its geometry engine proves correct, the GF100 could be substantially faster than the AMD Radeon HD 5000 series when it comes to things like DX11 Tessellation; one of the Radeon HD 5000 series main selling points right now. This is all theoretical of course until we actual test the GF100’s performance in games. "

Potential for faster geometry manipulation, thus giving devs more power to give us better-looking models for once?

You misread that quote. In the context of the first paragraph, the author is basically saying better AA for Nvidia specifically: they improved their own AA immensely. NOTHING more, nothing less.

The second paragraph is saying tessellation thus far is potentially better than the 5 series offering. AMD, with so much headroom on their chips, can easily add in a bunch of stuff with the refresh to improve tessellation; it can become moot very easily.


So is that what this entire topic thread was about? Your misunderstanding of the Fermi previews? Did I miss anything?
 
No, better image quality and faster geometry processing. It's right there in front of you.

Nvidia and ATI are very much tied image-quality-wise, but any improvements on Nvidia's side can potentially give them the leg up. Just food for thought.
 
I am so disappointed in you all. Nothing but trolls in this thread. Stay on topic. The topic is Fermi. Please Ati guys just leave this thread alone. It's obvious you are sucking on their titties and nothing Fermi could dream of doing is acceptable to you. It's such blatant fanboyism. Fermi could come and deliver 150 fps 32X AA and you would still nit pick it.

Whenever I see a post like this, I immediately chuckle. Why? The only people who make these personal attacks suggesting that another person is a fanboy (especially when that person is simply refuting the argument with factual evidence, not irrational brand praise) are the trolliest fanboys of all.

How silly.
 
No, better image quality and faster geometry processing. It's right there in front of you.

Nvidia and ATI are very much tied image-quality-wise, but any improvements on Nvidia's side can potentially give them the leg up. Just food for thought.

By the time tessellation is used so much that it will really matter who does it better, we will be using cards more powerful than the 5xxx and GF1xx series, so it's really a moot point at the moment. DX11 won't be widely adopted for a while, not until a lot more mainstream DX11 cards sell.

Higher levels of AA won't "change gaming forever". You're not going to see developers program 32x CSAA into their games very much, so it's likely going to end up like edge-detect AA on ATI cards, relegated to driver options.

Now if you had brought up something like 3D Surround (provided it works correctly), you would have something. Even though it'll be supported by older cards, two GF100s running three monitors and 3D Vision will probably be a nice smooth experience (assuming there is good driver support).
 
By the time tessellation is used so much that it will really matter who does it better, we will be using cards more powerful than the 5xxx and GF1xx series, so it's really a moot point at the moment. DX11 won't be widely adopted for a while, not until a lot more mainstream DX11 cards sell.

Higher levels of AA won't "change gaming forever". You're not going to see developers program 32x CSAA into their games very much, so it's likely going to end up like edge-detect AA on ATI cards, relegated to driver options.

Now if you had brought up something like 3D Surround (provided it works correctly), you would have something. Even though it'll be supported by older cards, two GF100s running three monitors and 3D Vision will probably be a nice smooth experience (assuming there is good driver support).

LOL, then if DX11, ATI's big draw for the 5870, won't be utilized or "widely adopted" for a while, what is YOUR draw to the 5870? What is the big draw of it? It's just faster, and you can hook up a couple of monitors.
 
Ok, look, first off: I love both ATI and Nvidia. I currently have a 5870, OC'ed. I love it. But reading up on Fermi, I cannot help but come to the conclusion it is going to change gaming.

I got on Stalker Clear Sky last night and played a little. I then came upon some trees. The trees are a disgusting mess of polygons. A GREAT-looking game marred by small subtle disasters like that. The trees are so laughably unremarkable it isn't funny. Fermi is going to push for geometry and polygons. Tessellation. We need more of this and we need it asap. I believe Nvidia knows this. We are at a point where we have all the effects, all the shaders to do so, and tons of power, but nothing is advancing graphically like it should be. Games still have a crisp look on PC, very good AA/AF and clear distances, but the geometry is not where it should be. When I come to a tree or a character or a building in a 2009 video game and it looks almost as bad as a tree in a game from '99, there is something wrong.

This is where Nvidia comes in. Forget Eyefinity, forget it all. Geometry is what it's all going to be about in the coming years. This will get us one more step closer to photorealistic gaming. I fully intend to buy a Fermi and test it. But will I sell my 5870 right now to do so? Not even close. I love this card. But I do not doubt that Fermi, however late it may be, is going to put us on the path to changing gaming forever. And this is coming from a guy that's been doing this way over a decade, using cards from 3dfx, ATI, Nvidia, Rendition, Intel, you name it.

****Just my opinion****

Only game developers can change the gaming world.
 
LOL, then if DX11, ATI's big draw for the 5870, won't be utilized or "widely adopted" for a while, what is YOUR draw to the 5870? What is the big draw of it? It's just faster, and you can hook up a couple of monitors.

Let's see:

5000 series: Eyefinity, DX11, tessellation, 3D glasses

300 series: Eyefinity, DX11, tessellation, 3D glasses

What exactly is Nvidia bringing in that is going to change gaming forever? ...10-20% faster than a 5870 is NOT going to change gaming forever.

Nvidia is late to the game. Will it be a badass product? We can only hope. Competition is good, and lowers prices.

Changing gaming forever? ...no, 3dfx changed PC gaming forever.
 
No, better image quality and faster geometry processing. It's right there in front of you.

Nvidia and ATI are very much tied image-quality-wise, but any improvements on Nvidia's side can potentially give them the leg up. Just food for thought.

Sigh... it's just Nvidia's new way of handling tessellation. You're reading too much into the marketing speak. Learn to read between the lines. Repeating for the second time: it does not mean better image quality for the entire video card industry. The author is talking about Nvidia's own AA. AMD has historically had the leg up on Nvidia in the AA department.
 
ATI is utilizing tessellation via DX11. It does not have a parallel engine made specifically for geometry power. TRY AGAIN.
 
ATI is utilizing tessellation via DX11. It does not have a parallel engine made specifically for geometry power. TRY AGAIN.

So do you have recent benchmarks from a review showing the difference in performance comparing ATI's tessellation to Nvidia's?

I don't see one on Hardocp.com, Anandtech.com, Techreport.com, Guru3d.com...

Please link me benchmarks showing the difference.
 
You know there are no benches on this yet. When they come, we'll know. But it's common knowledge that Fermi seems to be running ATI's own benchmark faster than the 5xxx series.
 
LOL, then if DX11, ATI's big draw for the 5870, won't be utilized or "widely adopted" for a while, what is YOUR draw to the 5870? What is the big draw of it? It's just faster, and you can hook up a couple of monitors.

Eyefinity, price/performance. My 5850 was a very viable upgrade from SLI 9800 GTXs, and I'm getting a 5870 with an Eyefinity setup so I can take good advantage of it. There is nothing wrong with a card simply being evolutionary. I'm not saying the GF100 will be a bad card; in fact I fully expect it to be great. I also fully expect it not to offer me anything compelling over the 5870 I'll be getting, as I would need two of them to take advantage of the tri-monitor setup, and likely a grand's worth of video cards is WAY more than I'd ever spend.

DX11 isn't something I expect to be a big deal until next year at least. Dirt 2 barely uses it, and the DX11 option in the CoP benchmark was far from impressive. I'm not expecting much from AvP or BC2. And considering Crytek's so-called DX10 "support" in the first Crysis, I'm not expecting real DX11 support from them for its sequel.
 
You know there are no benches on this yet. When they come, we'll know.

Then stop saying Fermi is going to change gaming forever. It brings nothing new to the table that isn't out in the open market already.

I am no fanboi of either product. I go with whatever is top-notch at a good price/performance.

But saying "changing gaming forever" is a fanboi comment, bro. And I too want Fermi out as badly as anyone else.

When you game high-end, you want the best. And come March we will see who is the best.

P.S. Nvidia showing benchmarks isn't common knowledge. I do not believe Nvidia or ATI PR; I want proof. Wait for reviews before saying "common knowledge."
 
I stated already I didn't mean to imply it will DEFINITELY change gaming. I said I feel it MAY.

And please get off the Eyefinity crap. All you ATI fanboys hang on to that Eyefinity stuff. It's a cool feature, but it's certainly not something I want. I mean, even if I were given it for free, I'd have to find the space for all the shit. LOL. It'll gain traction and may even be a very viable thing in a year or two, but for now, it's very niche.
 
You know there are no benches on this yet. When they come, we'll know. But it's common knowledge that Fermi seems to be running ATI's own benchmark faster than the 5xxx series.

A cherry-picked part of the benchmark, mind you. I would have loved to see the rest of the benchmark, plus full system specs for both test systems. Not that I'm doubting the benchmark, but when companies cherry-pick, it always makes me curious why they picked exactly that, since showing the entire run would be a better proof if it's so much better.
 