GTX 580 rumors

Maybe you should do Economics 101: supply and demand?
Another hint would be the prices for the 580. I thought supply was very limited; I guess I overestimated the demand.

LOL, right. For AMD it's always supply and demand. "They must be selling everything like hot cakes and so they can't keep up the supply". There's never any kind of problem. I'm sure even the Cayman delay isn't really a delay. They always intended it to be on December 13th or later, right? :)

Even you have to admit that it's more than ironic that they whine about NVIDIA's prices, yet they can't keep their own cards at the suggested price.
 
I'm sure even the Cayman delay isn't really a delay. They always intended it to be on December 13th or later, right? :)
No silly!

Even you have to admit that it's more than ironic that they whine about NVIDIA's prices, yet they can't keep their own cards at the suggested price.

Sorry, what? I see no relevance between temporary price cuts and price hikes due to demand.
 
AMD says so, Neliz and others say so; please show me documentation otherwise. Here's a link about this:
http://www.kitguru.net/components/g...e-constant-smell-of-burning-bridges-says-amd/

So your sources are AMD and neliz? Hahaha, man that's funny.

Anyway, if you think about the rasterization process you would realize how ridiculous it sounds to want sub-pixel-sized triangles. Your scan conversion and pixel shading would become so gloriously inefficient that no amount of tessellation ability will matter, on both Fermi and NI. nVidia's literature refers to adaptive tessellation, LOD scaling, view-dependent complexity etc. I suggest you get your info from the source instead of being led around by the nose.
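To make the quad-shading point concrete, here's a rough back-of-the-envelope model. The 2x2 quad shading granularity is real for current GPUs; the exact numbers are illustrative only, since real rasterizers also pay triangle-setup costs and the function names here are made up:

```python
# Illustrative model of quad-based overshading: GPUs shade fragments in
# 2x2 quads, so even a triangle covering less than one pixel still pays
# for at least four shader invocations. Assumes the best case of one
# quad touched per small triangle and no helper-lane reuse.
def overshade_factor(pixels_per_triangle: float) -> float:
    """Approximate fragment-shader invocations per covered pixel."""
    quads_touched = max(1.0, pixels_per_triangle / 4.0)
    shaded_fragments = quads_touched * 4  # 2x2 lanes per quad
    return shaded_fragments / pixels_per_triangle

for size in (16, 4, 1, 0.5):
    print(f"{size:>4} px/tri -> ~{overshade_factor(size):.1f}x shading work")
```

At 16 pixels per triangle the model shades roughly one fragment per covered pixel; at half a pixel per triangle it shades around 8x as many, which is exactly the inefficiency being described.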
 
Sorry, what? I see no relevance between price cuts and price hikes due to demand.

Of course you don't. It's not about relevance but about irony.
I also took the liberty of removing the part that came from AMD, from your post :)
 
Of course you don't. It's not about relevance but about irony.
I also took the liberty of removing the part that came from AMD, from your post :)

Oh sh**, I forgot I have a system here with sales figures and I always ask some random person at AMD for info!

Silly me.


Still no idea though... :confused:
 
So your sources are AMD and neliz? Hahaha, man that's funny.

Anyway, if you think about the rasterization process you would realize how ridiculous it sounds to want sub-pixel-sized triangles. Your scan conversion and pixel shading would become so gloriously inefficient that no amount of tessellation ability will matter, on both Fermi and NI. nVidia's literature refers to adaptive tessellation, LOD scaling, view-dependent complexity etc. I suggest you get your info from the source instead of being led around by the nose.

And yours are Nvidia? That's not funny? Picked up any good wood screws lately? :p

Seriously though, my whole point (and the point you obviously missed in the post you quoted) is that I don't care about the whole "Nvidia pushes pointless amounts of tessellation" to push cards, as long as game developers put in a slider making it possible to turn it down. My gripe with tessellation implementation in games is the on/off variant. As long as I can scale it down and adjust it in conjunction with AA and other settings, I am pleased.
 
And yours are Nvidia? That's not funny?

It's funny to get nVidia's stance on a technical matter from nVidia instead of from AMD? Are we in the twilight zone here?

My gripe with tessellation implementation in games is the on/off variant. As long as I can scale it down and adjust it in conjunction with AA and other settings, I am pleased.

That may be a possibility once engines are fully designed around tessellation. Now, not so much. Anyway, here you go. Educate yourself a bit so you won't be so easily hoodwinked in the future: http://graphics.stanford.edu/papers/fragmerging/shade_sig10.pdf. :)
 
It's funny to get nVidia's stance on a technical matter from nVidia instead of from AMD? Are we in the twilight zone here?

We must be in a twilight zone if you find Nvidia a more credible source while finding AMD a funny one. ;)



That may be a possibility once engines are fully designed around tessellation. Now, not so much. Anyway, here you go. Educate yourself a bit so you won't be so easily hoodwinked in the future: http://graphics.stanford.edu/papers/fragmerging/shade_sig10.pdf. :)

And that's my only gripe with the amount of tessellation: on/off. Otherwise, as I responded to Neliz, I don't care how much tessellation Nvidia wants, or about the AMD/Nvidia tessellation controversy, as long as there are different settings. ;)
 
We must be in a twilight zone if you find Nvidia a more credible source while finding AMD a funny one. ;)

I don't know if you're just being silly for kicks but who exactly is a better source for nVidia's guidelines on the use of tessellation than nVidia themselves?
 
I don't know if you're just being silly for kicks but who exactly is a better source for nVidia's guidelines on the use of tessellation than nVidia themselves?

I'm wondering if you are being silly for kicks. The controversial thing here is about games and developers, not the content of slides on nvidia.com.
 
Haha, ok man. I see where you're coming from now. Facts really are going out of style it seems.
 
Haha, ok man. I see where you're coming from now. Facts really are going out of style it seems.

Facts are what matters. This controversy is not about the methods described at nvidia.com or amd.com, but about what is allegedly relayed to developers for use in games. If you point to slides at nvidia.com and say they explain everything, you haven't understood the controversy part of this.

Obviously we are not going to agree here, so I am going to leave it at this. As I pointed out to Neliz, I don't care what AMD and Nvidia argue about; as long as they put a slider in games where I can select the amount, I have no problems whatsoever. I would advise you to continue discussing this with Neliz if he cares more about the controversy beyond that.
 
Indeed. Anyone who equates a Stanford paper by leading engineers in the industry with "slides at nvidia.com" most certainly is more interested in drama and controversy than truth.
 
Indeed. Anyone who equates a Stanford paper by leading engineers in the industry with "slides at nvidia.com" most certainly is more interested in drama and controversy than truth.

I haven't commented on the Stanford paper yet. The nvidia.com remark refers to your "who exactly is a better source for nVidia's guidelines on the use of tessellation than nVidia themselves?"

You quote me in an answer to Neliz without bringing in any facts to support your view. When asked, you refer to a Stanford paper (showing that you don't have a clue what the controversy is about) and then continue with something that looks more like spin than discussion. I see no chance of us agreeing on anything.

But whatever, spin on! :D
 
You just talk about me and tessellation because I have more curved surfaces than you can shake a stick at?
 
The most recent one is the price thing. AMD whined about NVIDIA's price cuts on the GTX 460s, claiming they were going to be "temporary", yet their HD 6850s and HD 6870s are actually going up in price, while the GTX 460s are at the same price or lower than at the time of the cuts.

The price thing is really the funny part. And what is even funnier is that Nvidia made $89 million in profit in the latest third quarter while AMD's GPU business made a laughable $1 million. It seems like Nvidia's price cuts are hurting AMD. :)

And here is my bet for the coming 6 months. Nvidia will continue to launch new products with very aggressive pricing; the GTX 570, and even more so the GTX 560, will show that. The latter will destroy Barts, as the Barts launch was too weak. Too much room for the competition to knock this product down in at least one price category.

So I expect that Nvidia's strategy with the new products, with on-time launches and very aggressive pricing, will lead to market-share gains for them and to red numbers for AMD's GPU business. It's quite obvious.
 
The Barts launch was too weak. Too much room for the competition to knock this product down in at least one price category.

I'm happy Nvidia continues to profit, but let's be honest here: saying the Barts launch was weak is FAR from the truth, even speculatively speaking. The Barts cards were a massive success. Here's why:

1. Smaller, more efficient chips than last gen, which cost AMD less than the 5700 parts they are replacing
2. They are much better than their intended competition:
a. 6870 vs GTX 460 1GB (a slaughter)
b. 6850 vs GTX 460 768MB (a slaughter)
3. They ushered in a new feature that will go down in history as one of the best things to come along in a long time: post-process anti-aliasing (MLAA). I'm a neutral guy who usually leans towards Nvidia and in most scenarios roots for the underdog, yet even I want MLAA or some form of it on my GPUs.

Oh, and imagine if Nvidia hadn't moved the GTX 470 down to a completely lower price segment. I'm sure they didn't plan to, but their hand was forced by the amazing thing that is Barts.
 
Meanwhile, back to ....

This seems like a reasonably fair investigation of the HAWX2 tessellation controversy
http://techreport.com/articles.x/19934/8

Ubisoft took suggestions from both NV & AMD
For its part, Nvidia freely acknowledged the validity of some of our criticisms but claimed the game's developers had the final say on what went into it. Ubisoft, they said, took some suggestions from both AMD and Nvidia, but refused some suggestions from them both.

Developer says game was developed to have playable frame-rates on AMD hardware, with an estimated polygon size of 18 pixels
On the key issue of whether the polygon counts are excessive, Nvidia contends Ubisoft didn't sabotage its game's performance on Radeon hardware. Instead, the developers set their polygon budget to allow playable frame rates on AMD GPUs. In fact, Nvidia estimates HAWX 2 with tessellation averages about 18 pixels per polygon. Interestingly, that's just above the 16 pixel/polygon limit that AMD Graphics CTO Eric Demers argued, at the Radeon HD 6800 series press day, is the smallest optimal polygon size on any conventional, quad-based GPU architecture.
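As a sanity check on that 18 pixels/polygon figure, a quick back-of-the-envelope calculation. The 2560x1600 resolution is an assumption (as guessed from the screenshot below), and the real average depends on overdraw and off-screen geometry:

```python
# Rough estimate: how many on-screen polygons does an average of
# 18 px/poly imply for a frame that is fully covered by geometry?
width, height = 2560, 1600     # assumed resolution
pixels_per_poly = 18           # Nvidia's estimated average for HAWX 2
visible_polys = width * height / pixels_per_poly
print(f"~{visible_polys:,.0f} polygons covering the frame")
```

That works out to a couple hundred thousand polygons per frame, which is a lot, but nowhere near the millions you would need for sub-pixel triangles at this resolution.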

Looking at this picture (which I'm guessing is resized from 2560x1600), you can see
1) although the polys are small, they're certainly not sub-pixel
2) there is LOD going on
- this is actually where TechReport are incorrect in their analysis
- the LOD is varied across the image to keep the polys roughly at the same size

http://techreport.com/image.x/geforce-gtx-580/hawx2-wireframe.jpg

So, as I said before, AMD is basically just moaning because NV has better tessellation h/w than they do
- and then making up some nonsense about sub-pixel polys killing performance on AMD h/w
:rolleyes:
 
I haven't commented on the Stanford paper yet. The nvidia.com remark refers to your "who exactly is a better source for nVidia's guidelines on the use of tessellation than nVidia themselves?"

You quote me in an answer to Neliz without bringing in any facts to support your view. When asked, you refer to a Stanford paper (showing that you don't have a clue what the controversy is about) and then continue with something that looks more like spin than discussion. I see no chance of us agreeing on anything.

But whatever, spin on! :D


Subpixel-size triangles are nice if you want to decrease aliasing; that's the main issue with tessellation, aliasing. I have worked with tessellators (software) for close to 10 years now, so I'm pretty comfortable saying AMD's talk about it is ridiculous. The goal is to increase polygon counts while maintaining image quality, and to do that you need to tessellate enough that aliasing doesn't affect the end image. Just because AMD's cards can't do it right now doesn't mean it's not the future; it damn well is.
 
Subpixel-size triangles are nice if you want to decrease aliasing; that's the main issue with tessellation, aliasing. I have worked with tessellators (software) for close to 10 years now, so I'm pretty comfortable saying AMD's talk about it is ridiculous. The goal is to increase polygon counts while maintaining image quality, and to do that you need to tessellate enough that aliasing doesn't affect the end image. Just because AMD's cards can't do it right now doesn't mean it's not the future; it damn well is.

I have been gaming for 30 years now and have seen a lot of features put into games with so high a performance impact that they are reserved for a few. In many of those cases they either are, or could be made, scalable so others can benefit from them too. Most gamers don't have high-end cards, and an on/off feature that is useless for most gamers is overdone.

This includes tessellation. So if you, who have worked with tessellation for 10 years, have anything to do with its use, please don't make it a useless feature, the way Advanced Depth of Field in Metro 2033 is for most people.

My wish for tessellation is that they either find a sweet spot so most people can enjoy it, or make it scalable instead of an on/off feature.
 
I think it's fair to have a slider for max tessellation factors, though that depends on the developer. But the adaptive algorithm is made to increase the performance of the tessellation process, and limiting it will introduce image-quality issues because of aliasing. So really it's a trade-off: if you limit it, you get increased aliasing.
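A hedged sketch of how such a slider could interact with an adaptive algorithm. The function name, the edge-length metric, and all the numbers are made up for illustration, not any engine's actual API: the adaptive factor targets a given on-screen edge length, and the slider simply caps it, trading detail (and some geometric aliasing) for speed.

```python
def tess_factor(edge_len_px: float, target_px: float, user_max: float) -> float:
    """Adaptive tessellation factor: subdivide until edges are roughly
    target_px on screen, but never beyond the user-selected cap."""
    adaptive = edge_len_px / target_px        # longer edges get more subdivision
    return max(1.0, min(adaptive, user_max))  # clamp to [1, user_max]

print(tess_factor(128, 8, user_max=64))  # adaptive wins: 16.0
print(tess_factor(128, 8, user_max=4))   # slider caps it: 4.0
```

The clamp is the whole trick: the developer's adaptive heuristic stays intact, and the slider only limits its upper bound, which is why it should be cheap to expose.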
 
I think it's fair to have a slider for max tessellation factors, though that depends on the developer. But the adaptive algorithm is made to increase the performance of the tessellation process, and limiting it will introduce image-quality issues because of aliasing. So really it's a trade-off: if you limit it, you get increased aliasing.

I agree, user-selectable LOD scalability would be a good thing
- and should be trivial for developers to add

Most current Tessellation benches like Heaven & Stone Giant have a few different settings for this anyway.
 
I think it's fair to have a slider for max tessellation factors, though that depends on the developer. But the adaptive algorithm is made to increase the performance of the tessellation process, and limiting it will introduce image-quality issues because of aliasing. So really it's a trade-off: if you limit it, you get increased aliasing.

A slider would solve the problem for gamers. It doesn't matter if you get increased aliasing at lower settings if the higher settings mean you have to turn off tessellation anyway (or turn on tessellation but turn off MSAA) and it becomes useless to most gamers. :)
 
A slider would solve the problem for gamers. It doesn't matter if you get increased aliasing at lower settings if the higher settings mean you have to turn off tessellation anyway (or turn on tessellation but turn off MSAA) and it becomes useless to most gamers. :)

It's not really aliasing that's the problem with too little tessellation
- it's just that objects and landscapes will tend to look more 'polygony'
- we all know what low-poly models used to look like... so too little tessellation will look more like that
- plus, I think there would need to be a certain minimum amount of tessellation if developers are going to move away from parallax correction on their Bump maps...
 
Subpixel-size triangles are nice if you want to decrease aliasing; that's the main issue with tessellation, aliasing. I have worked with tessellators (software) for close to 10 years now, so I'm pretty comfortable saying AMD's talk about it is ridiculous. The goal is to increase polygon counts while maintaining image quality, and to do that you need to tessellate enough that aliasing doesn't affect the end image. Just because AMD's cards can't do it right now doesn't mean it's not the future; it damn well is.

Yeah, I'm not sure tessellation helps with aliasing
- but it helps make objects look less blocky

And I think the goal of tessellation should be to eliminate the need for parallax correction on the Bump maps
- which will help reduce the work-load on the pixel shaders.
- which ought to be a net gain both in terms of visual quality, and in terms of processing speed
- provided, that is, the tessellators can handle the polygon workload
;)
 