So Long CPU Wars

And we'll all be buying them as we will no longer be able to afford Intel... :(
 
That article didn't say that AMD is going to quit making CPUs. It said that they are changing their marketing strategy and concentrating more on graphics chips.

And we'll all be buying them as we will no longer be able to afford Intel... :(

Really, I wish people would quit saying stuff like this. Intel isn't going to start price gouging just because AMD has decided to focus more of their attention on another market.
 
I was under the impression that the "Vision" branding was for note/netbooks. Interesting.
 
That article didn't say that AMD is going to quit making CPUs. It said that they are changing their marketing strategy and concentrating more on graphics chips.



Really, I wish people would quit saying stuff like this. Intel isn't going to start price gouging just because AMD has decided to focus more of their attention on another market.

He had a point: Intel CPUs always start at $1,000 and aren't ever THAT fairly priced, while AMD has always been the budget boy and has now shifted its view to improving price per performance. This is good. People were buying these chips because they were cheap for the performance, not because they were the best; Intel is for that. But with AMD NOT trying to make the fastest chips, they can focus on improving their manufacturing process and helping people save more money. This will help their sales and keep them in business. Hopefully they will pick the CPU production back up, but I for one believe that computers will get to the point where the GPU does all the work and the CPU will be built into the motherboard, like the northbridge and southbridge are now.
 
Weird...

Seems purely to be a marketing scheme... it has been a long time since an AMD MHz = an Intel MHz anyway.

It may work out better for them though, as the newer Intel-based general-user boxes are very poorly marketed in respect to what they can actually do well.
 
He had a point: Intel CPUs always start at $1,000 and aren't ever THAT fairly priced, while AMD has always been the budget boy and has now shifted its view to improving price per performance. This is good. People were buying these chips because they were cheap for the performance, not because they were the best; Intel is for that. But with AMD NOT trying to make the fastest chips, they can focus on improving their manufacturing process and helping people save more money. This will help their sales and keep them in business. Hopefully they will pick the CPU production back up, but I for one believe that computers will get to the point where the GPU does all the work and the CPU will be built into the motherboard, like the northbridge and southbridge are now.

Intel releases their best processors in the form of the Extreme Editions. This much is true. However, Extreme Editions are released with the rest of the lineup or as additions to an existing product line. They are never the "start of the line." The Core 2 Duo, Core 2 Quad, Core i7 900 series, and Core i5 / Core i7 800 series CPUs are available in a variety of price ranges that many people would consider more than reasonable.

The first week they were available, the Core 2 Duo E6300 was only $183.00. Given that the processor was better than anything in the Pentium D lineup, I'd say that's damned reasonable. The Core 2 Quad Q6600 ended up in the realm of $266 even when almost nothing challenged it other than more expensive Intel CPUs. The E8400 was another damned powerful processor at a reasonable price. The Core i7 920 was less than $300 at launch, or thereabouts. Pretty reasonable given the performance that it offers. At the launch of most of these CPUs, AMD had nothing that matched their offerings. Still, the prices remained pretty reasonable.

Let's not forget that the really expensive CPUs only make up a small portion of Intel's business in the consumer marketplace. Let's also keep in mind that the days of people spending $1,700-$3,000 on a personal computer outside of enthusiast circles are virtually over. Since the introduction of sub-$1,000 computers, prices have done nothing but fall for everything but enthusiast-geared hardware. People have been used to cheap computer hardware for several years now. Even if AMD departed from the CPU market totally tomorrow, we wouldn't see much if any difference in Intel's pricing structure. Intel knows that a drastic increase in the prices of existing products would cause their sales to halt virtually overnight. Intel didn't get to be where they are by being stupid or failing to understand the market and its demands.
 
I don't see where you get that AMD is giving up on CPUs... that article is obviously trying to make more of a story than is there, but I think AMD is simply going to start pushing the idea of a platform, an all-exclusive AMD CPU, GPU, etc. setup, as they move forward in the next couple of years with Fusion-like ideas. I firmly believe we will see the day again when the fastest desktop processor isn't from Intel.
 
They'll sell more stuff eventually. This is PROBABLY to hold them over until they can finalize, tape out, and then release Bulldozer in Q4 2010 / Q1 2011.
 
The CPU wars are not over just yet. If AMD sets aggressive prices on upcoming Athlons and makes a few breakthroughs, they COULD pull another rabbit out of their hat.

For instance, Intel is getting a little stupid with upcoming Atom pricing. If AMD pulls a stunner by letting loose an Ion competitor with a 10W dual core and GPU on a 40nm process in early to mid 2010, they can make Intel look foolish for once.
 
http://sites.amd.com/us/vision/Pages/vision.aspx
http://sites.amd.com/us/vision/entertainment/Pages/ultimate.aspx

VISION Technology - Ultimate


For a superior digital entertainment experience
The name really says it all.

VISION - Ultimate offers more discerning users the cutting-edge, high-definition performance they need to enjoy (and even create) rich, vivid, superior HD entertainment.1

These systems sport high-end multi-core processors and top-quality discrete graphics cards that feature DirectX® 10.1 for high-definition realism that puts you right in the middle of the action.

* Crisp, smooth detail when streaming from online movie services or playing the latest Blu-ray movies 1,2,3
* Record live TV to enjoy when and where you like 4
* Immerse yourself in 3D games with life-like graphics and quick-response action
* Enjoy music and movies in 5.1 surround sound (or better)
* Create your own HD movies and music to share with the world1,4

If you can imagine it, you can do it with the speed and responsiveness to cruise through even the most demanding tasks, at home or on the road. You'll easily connect to virtually any of the people, devices and places you want.

Systems featuring VISION - Ultimate can also include AMD Fusion Media Explorer, which is an incredibly convenient tool for managing your music, movies, and photo collections as well as recording and watching live TV.4

VISION - Ultimate lets you see, share, and create your world at the highest level.

Ready for more? We have a few tips, tools and videos that will help you have a better experience on your computer with VISION from AMD.


1. HD monitor required
2. Internet access required (online movie access)
3. Blu-ray drive required
4. Additional hardware/software required

How does this differ from the current and previous market segments for AMD's top desktop/home parts?
 
Sounds to me like AMD is trying to take a shot at something like Centrino. Sell the platform on the strength of their GPUs (something Intel has no hope of matching) to be able to move CPUs. I didn't get the impression AMD was giving up on CPUs at all, far from it.
 
If we're talking purely integrated desktop platforms (which get reported once a week as dying, along with PC gaming), AMD can provide a better experience for less money to the vast majority of Best Buy-type customers right now. Laptop or mobile, though, they have to do a lot better on battery life.

Otherwise though, smart move.
 
Yeah, I don't see where AMD is talking about making LESS capable processors; they're just changing up their marketing strategies a bit.

I think AMD has shown they have no intention of going gently into that good night. With the release of the Phenom II X2 and X3 chips that are competitive with Intel chips at nearly double the price, I think they've shown they mean to go down swinging.

As far as Intel not having stupid pricing, I have a very good feeling that if AMD wasn't kicking Intel's rear in the dollar-for-dollar category, there would never have been a $200 i5 line.
 
It's a new marketing campaign; nowhere does it say they're giving up on CPUs. Seems like a lot of speculation to me.

Personally, I'm holding out for the ol' "bait 'n' switch".
 
This is just a biased article written in a way to make AMD look like it's going down the hole. Everything is still going to be the same. No one's going to stop making good processors here. Propaganda from Intel, most likely, lol.
 
Yeah, I don't see where AMD is talking about making LESS capable processors; they're just changing up their marketing strategies a bit.

I think AMD has shown they have no intention of going gently into that good night. With the release of the Phenom II X2 and X3 chips that are competitive with Intel chips at nearly double the price, I think they've shown they mean to go down swinging.

As far as Intel not having stupid pricing, I have a very good feeling that if AMD wasn't kicking Intel's rear in the dollar-for-dollar category, there would never have been a $200 i5 line.

I agree, people have forgotten what an Intel-dominated market can be like. The fact that AMD was here fighting it out at the mid tier is what kept Intel honest these last couple of rounds. Frankly, I think Intel is going to reap huge losses when they take this discrete GPU go-around: they're going to put tons into development (or already have), then advertise and hype the heck out of it, but at best it's going to be lackluster, and I frankly don't see it stealing much from AMD or nVIDIA... unless Intel buys nVIDIA out. If AMD can produce Fusion with Bulldozer as promised and get it out six months to a year before Intel has a similar product, I think AMD will kill it with business and mainstream machines. IMO.
 
Looks more to me like AMD is shifting their marketing to where the money is, and GPUs/graphics cards are where the money is. If they can outperform nVIDIA and price competitively like they have been, it would be a very good thing. Just look at the math when it comes to upgrading: the average consumer usually changes graphics cards every 2-3 years, whereas with CPUs it's anywhere from 3-5 years. Now if only they would friggin' release Havok support already, or allow CUDA to work on their graphics cards, then they would really be able to compete against nVIDIA.
 
Yeah... title of the thread is way off.

Why would AMD bother to talk 6-core consumer/12-core business platforms if they were done with the CPU market?

Also, it says the article is from the New York Times... WHO READS THAT RUBBISH?! It's just as bad as Fud and the Inq. The difference is that the most tech-savvy New York Times writer knows LESS than anybody over at Fud.

What a ridiculous troll thread from a guy that claims not to be a troll because OMG he has an AMD system, too.

If anything, AMD is going to begin to hit the CPU market harder, slowly but surely, now that Global Foundries is really starting to grow. I wouldn't say we'll see something in this generation outside of a 6-core desktop chip, and Intel will most assuredly beat them to desktop 32nm, but to say they're officially done competing in a market they were more outclassed in 2 years ago than they are now is just dumb. Compare AMD in 2007 to AMD in 2009 and you'd be hard-pressed to say now is the time they throw in the towel.
 
Ya know, if AMD wanted to go the route of concentrating on their graphics line as their breadwinner, since they are clearly on top in that area, and then continue to offer Phenom II Black Editions at $100-150 prices to keep Intel sweating and constantly having to offer better value, that's not a bad idea.

This way they'd have the best graphics cards across the board and they'd own the $200-and-below CPU line. That would suit me just fine, as I doubt I will EVER pay more than $150 for a CPU. So if AMD wants to continue cranking out awesome chips like the X2 550, with unlocked multipliers that can OC to nearly 4 GHz, all for $100 shipped, I'll be more than happy to buy them and won't lose a moment's sleep over not having the latest and greatest i5, i3, or whatever.
 
Ya know, if AMD wanted to go the route of concentrating on their graphics line as their breadwinner, since they are clearly on top in that area, and then continue to offer Phenom II Black Editions at $100-150 prices to keep Intel sweating and constantly having to offer better value, that's not a bad idea.

Unfortunately it doesn't work that way. AMD can't simply undercut Intel's pricing and expect to gain massive amounts of marketshare. It would take aggressive marketing and probably a greater manufacturing capacity than they have available to them to do that. These are two things AMD can't really afford.

This way they'd have the best graphics cards across the board and they'd own the $200-and-below CPU line. That would suit me just fine, as I doubt I will EVER pay more than $150 for a CPU. So if AMD wants to continue cranking out awesome chips like the X2 550, with unlocked multipliers that can OC to nearly 4 GHz, all for $100 shipped, I'll be more than happy to buy them and won't lose a moment's sleep over not having the latest and greatest i5, i3, or whatever.

Owning the CPU and graphics chip market are two different things. How would having the graphics card market ensure them ownership of the $200 and lower price point? That doesn't make any sense.
 
Unfortunately it doesn't work that way. AMD can't simply undercut Intel's pricing and expect to gain massive amounts of marketshare. It would take aggressive marketing and probably a greater manufacturing capacity than they have available to them to do that. These are two things AMD can't really afford.

Uh, this thread is about an article about AMD ramping up its marketing efforts. Clearly they think they can afford it ;)

Owning the CPU and graphics chip market are two different things. How would having the graphics card market ensure them ownership of the $200 and lower price point? That doesn't make any sense.

By doing what Intel did with Centrino. Play up the (false, of course) idea that AMD's GPUs are faster when paired with AMD's CPUs. Give it some catchy name and the public eats that crap up. Mainstream gamers are incredibly stupid and gullible. I remember one kid that thought he bought a wicked fast SLI gaming machine - too bad the two cards that the OEM sold him were 8500 GTs, which, of course, meant it was slow as balls in games. AMD can use the same stunt to "trick" gamers into buying their slower CPUs to get their faster GPUs.
 
Part of me really wishes that AMD would be bought out by IBM or another large player that could give AMD the financial backing to REALLY innovate and compete against intel. I know it's a pipe dream, but if AMD had a processor equal or better than i7 right now, we'd all win.
 
Yeah... title of the thread is way off.

Why would AMD bother to talk 6-core consumer/12-core business platforms if they were done with the CPU market?

Also, it says the article is from the New York Times... WHO READS THAT RUBBISH?! It's just as bad as Fud and the Inq. The difference is that the most tech-savvy New York Times writer knows LESS than anybody over at Fud.

What a ridiculous troll thread from a guy that claims not to be a troll because OMG he has an AMD system, too.

If anything, AMD is going to begin to hit the CPU market harder, slowly but surely, now that Global Foundries is really starting to grow. I wouldn't say we'll see something in this generation outside of a 6-core desktop chip, and Intel will most assuredly beat them to desktop 32nm, but to say they're officially done competing in a market they were more outclassed in 2 years ago than they are now is just dumb. Compare AMD in 2007 to AMD in 2009 and you'd be hard-pressed to say now is the time they throw in the towel.

The title of the thread is the title of the article. I'm posting it because I think it is relevant.

As for being a troll... I posted this because I thought it relevant (being a fan of AMD/ATi GPUs), but since the article talked more about CPUs than GPUs I posted it here. It makes sense to me that AMD would drop the emphasis on its CPU performance, because they simply have nothing to compete with right now (if they drop the pricing on their Phenom II X4, things are going to get worse, as those chips are HUGE and cost a lot to fab).

I own more than one AMD system.. have a look for yourself:

http://i845.photobucket.com/albums/ab15/z3r0c00l_2009/DSC00003.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC02122.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC01903.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC01459.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC00859.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Systems/DSC02173.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Systems/DSC02078.jpg

That's just a small example. I am an enthusiast and always have been. I have working boxes from an 80486 all the way to a Core i7 / Phenom II X4. I also have nearly every single CPU model ever released... and here is a sample:

http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/VIAC3.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/PIIISOCKET370.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/PIIISLOT1.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/PentiumPro.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/Pentium.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/NETBURST.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK62.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK7b.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK7a.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK6.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/386-486.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/DSC00799.jpg

The thread is relevant if you've read this:
http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230

Notice how AMD GPUs are far more powerful and efficient than anything out there? It makes sense for AMD to concentrate on and leverage that success (especially considering their GPU business is making money). Currently, their developer tools for their Stream GPGPU capabilities are behind nVIDIA's CUDA tools (the only reason nVIDIA GPUs are faster at Folding@Home).

I'm seeing AMD leverage this power (especially with the RV870). I expect a full OpenCL announcement at the same time as the announcement of the RV870, and I see AMD pushing OpenCL REALLY HARD come September 22nd. Eyefinity is just one way of leveraging that power. I cannot wait for the official release.
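
For anyone who hasn't seen what vendor-neutral GPGPU code looks like, here's a minimal, purely illustrative sketch using PyOpenCL. It's just a vector add; the kernel name vec_add and the variable names are made up for the example, and it assumes you have PyOpenCL and a working OpenCL driver installed:

Code:
# Minimal PyOpenCL sketch (illustrative only; not AMD's or nVIDIA's sample code).
import numpy as np
import pyopencl as cl

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

ctx = cl.create_some_context()   # picks whatever OpenCL device is available (AMD, nVIDIA, or CPU)
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The same kernel source compiles for any vendor's OpenCL implementation.
program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("max error:", float(np.max(np.abs(result - (a + b)))))

The point being that the same kernel source runs on AMD or nVIDIA hardware once each vendor ships a mature OpenCL stack, which is exactly why the tools matter as much as the silicon.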
 
The title of the thread is the title of the article. I'm posting it because I think it is relevant.

As for being a troll... I posted this because I thought it relevant (being a fan of AMD/ATi GPUs), but since the article talked more about CPUs than GPUs I posted it here. It makes sense to me that AMD would drop the emphasis on its CPU performance, because they simply have nothing to compete with right now (if they drop the pricing on their Phenom II X4, things are going to get worse, as those chips are HUGE and cost a lot to fab).

I own more than one AMD system.. have a look for yourself:

http://i845.photobucket.com/albums/ab15/z3r0c00l_2009/DSC00003.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC02122.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC01903.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC01459.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Athlon64 Pics/DSC00859.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Systems/DSC02173.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/Systems/DSC02078.jpg

That's just a small example. I am an enthusiast and always have been. I have working boxes from an 80486 all the way to a Core i7 / Phenom II X4. I also have nearly every single CPU model ever released... and here is a sample:

http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/VIAC3.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/PIIISOCKET370.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/PIIISLOT1.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/PentiumPro.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/Pentium.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/NETBURST.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK62.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK7b.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK7a.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/AMDK6.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/386-486.jpg
http://img.photobucket.com/albums/v51/ElMoIsEviL/CPU/DSC00799.jpg

The thread is relevant if you've read this:
http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230

Notice how AMD GPUs are far more powerful and efficient than anything out there? It makes sense for AMD to concentrate on and leverage that success (especially considering their GPU business is making money). Currently, their developer tools for their Stream GPGPU capabilities are behind nVIDIA's CUDA tools (the only reason nVIDIA GPUs are faster at Folding@Home).

I'm seeing AMD leverage this power (especially with the RV870). I expect a full OpenCL announcement at the same time as the announcement of the RV870, and I see AMD pushing OpenCL REALLY HARD come September 22nd. Eyefinity is just one way of leveraging that power. I cannot wait for the official release.

Don't tell me you just did a GFLOPS comparison across architectures?
 
Ummm I didn't do it.

Real World Tech is a website filled with engineers and scientists. They write articles looking into the architectures. You should read that article.

Read this:

This is not a true ‘apples to apples’ comparison, but a start in the right direction. Theoretical peak FLOP/s is not a great metric because it does not consider utilization in real software, nor does it reward investments in ease of use and programming. In an ideal world, we would measure performance and energy consumed on a given set of benchmarks. However, choosing a set of benchmarks is very complicated, especially given the myriad of different instruction sets and capabilities. There are standards for comparing CPUs, e.g. SPECcpu2006, and there are some standards (albeit somewhat dodgy) for comparing GPUs - but nobody has established a consistent and fair way to compare CPUs and GPUs. Additionally, getting access to all this hardware to run a set of benchmarks would be very challenging. Rather than delving into that morass of complexity, we instead opted to focus on a simpler performance number that is tied to physical quantities (frequency and execution width) and hence readily available. Other complicating details include:
* Some GPUs and Cell do not fully support the IEEE 754 double precision standard.
* GPUs and CPUs typically require additional chips to make a complete system. For instance, GPUs need host processors, some CPUs need external memory controllers or caches. We do not estimate the area and power costs of these supporting chips.
* GPUs and CPUs use different process nodes which are not always directly comparable, and process technology heavily influences power and density.
* GPUs have a very restricted programming model; they do not run certain workloads, cannot boot an operating system and require a host processor.
* GPU power numbers may be system level and may include graphics DRAM or other components.
* Some CPUs and GPUs have exotic or expensive cooling systems, which substantially lower power consumption by reducing junction temperature and leakage, but add cost.
* Server CPUs have high capacity memory systems, high bandwidth coherent interconnects and large caches to enable scalable systems; these don’t contribute to FLOP/s and cost power and area but are essential for key workloads (ERP, OLTP, etc.).
* CPU and GPU vendors measure TDP or power differently in some cases.
* CPUs (especially for servers) have much more extensive reliability features such as error correction than current GPUs.
* Performance/watt and performance/mm2 vary with product SKUs and frequency bins.
[chart from the linked article: CPU vs. GPU compute efficiency comparison]

But yes... if you had the software available, the RV770 would crush the GT200b (with ease, might I add) when it comes to pure computational performance. The problem is that AMD wasn't really concentrating on their GPU development (until now, with the introduction of Vision). I am of the opinion that we have quite a few surprises in store very soon, but that's another topic entirely.
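
Just to put rough numbers on "pure computational performance", here's a quick back-of-the-envelope sketch using the commonly cited shader counts and clocks for the HD 4870 (RV770) and GTX 280 (the original 65nm GT200; the 55nm GT200b just clocks a bit higher). Treat these as ballpark public figures, not official measurements:

Code:
# Rough peak single-precision GFLOPS estimates from commonly cited specs
# (ballpark figures for illustration, not vendor-verified numbers).
def peak_gflops(alus, flops_per_alu_per_clock, clock_mhz):
    return alus * flops_per_alu_per_clock * clock_mhz / 1000.0

# RV770 (HD 4870): 800 stream processors, each a multiply-add (2 flops) per clock, 750 MHz core.
rv770 = peak_gflops(800, 2, 750)           # ~1200 GFLOPS

# GT200 (GTX 280): 240 scalar ALUs at a 1296 MHz shader clock.
gt200_mad = peak_gflops(240, 2, 1296)      # ~622 GFLOPS counting the MAD only
gt200_mad_mul = peak_gflops(240, 3, 1296)  # ~933 GFLOPS if you also count the co-issued MUL

print(f"RV770 peak SP:            {rv770:.0f} GFLOPS")
print(f"GT200 peak SP (MAD only): {gt200_mad:.0f} GFLOPS")
print(f"GT200 peak SP (MAD+MUL):  {gt200_mad_mul:.0f} GFLOPS")

(The Real World Tech comparison above was about double precision, but the same caveat applies either way: these are paper numbers, and the whole argument is about how much of them the software can actually reach.)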

Also... do you know who the user JumpingJack is? He approves of the article itself: http://forum.xcpus.com/graphics-displays/18400-gpu-v-cpu-efficiency-examined.html#post295105

He is just about the most knowledgeable person online when it comes to CPU and GPU architectures. You'd have to engage him in a conversation in order to find that out for yourself. What's holding AMD back is software. Now, if you're talking games, what was primarily holding AMD back was texturing and Render Back End performance (not computational performance). You could argue that simple shaders execute more efficiently on nVIDIA's architecture than they do on AMD's highly parallel architecture, but that has nothing to do with GPGPU computational performance. An example of all of this is Crysis (the game), which is far more texture-intensive than it is shader-intensive. When you look at the ALU:TEX ratio, nVIDIA GPUs have a far lower ratio of 2:1 or 3:1 (the ALU:TEX ratio for RV770 is 4:1). Although the RV870 doesn't change this ratio, it does double the TMUs (80 TMUs now) while also doubling the ALUs (1600 now). So this is why Crysis can now run quite well on the RV870 (as we've seen in countless videos available on YouTube): the bottleneck for Crysis performance was texturing performance (just look at the putrid performance put out by the RV670).

You could ask: then why is nVIDIA's current architecture not eating up Crysis (since it supposedly has 80 TMUs)? Well, this is because once you enable AF, nVIDIA's TMU architecture acts like a 40-TMU architecture (unlike AMD's) for the AF algorithms. So although it's not a full 80-TMU architecture, it acts like something between a 40- and 80-TMU architecture, varying with the texturing job being processed (whereas, as you'll see on Sept 22nd, the RV870 is a true 80-TMU design).
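
To make the ALU:TEX point concrete, here's a quick sketch of the ratios using the usual public unit counts (counting each 5-wide VLIW block as one ALU, which is how the 4:1 figure is normally derived; ballpark numbers, nothing official):

Code:
# ALU:TEX ratio sketch from commonly cited unit counts (illustrative only).
# AMD's "stream processors" come in 5-wide VLIW blocks, so 800 SPs = 160 ALU blocks.
parts = {
    "RV670 (HD 3870)": (320 // 5, 16),   # 64 ALU blocks, 16 TMUs
    "RV770 (HD 4870)": (800 // 5, 40),   # 160 ALU blocks, 40 TMUs
    "RV870 (HD 5870)": (1600 // 5, 80),  # 320 ALU blocks, 80 TMUs (ratio unchanged, TMUs doubled)
    "GT200 (GTX 280)": (240, 80),        # nVIDIA counts scalar ALUs directly
}

for name, (alus, tmus) in parts.items():
    print(f"{name}: {alus} ALUs / {tmus} TMUs = {alus / tmus:.0f}:1")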
 
FLOPS are the worst PR invention ever:
http://foldingforum.org/viewtopic.php?f=51&t=10442



http://www3.interscience.wiley.com/cgi-bin/fulltext/121677402/HTMLSTART?CRETRY=1&SRETRY=0

The performance benchmarks are for an Nvidia GTX 280 and an ATI 4870. On paper, the 4870 has the advantage with higher theoretical peak FLOPS, but for this folding implementation the GTX 280 is

  • 100% quicker for small proteins (~500 atoms)
  • 40% quicker for medium (~1200 atoms, the largest we are currently folding)
  • 20% quicker for large proteins (~5000 atoms)
This is despite the ATI card doing up to twice as many FLOPS during the calculations.
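
To put those results in perspective, here's a rough back-of-the-envelope calculation using the commonly quoted peak figures (~1200 GFLOPS for the HD 4870 and ~622 GFLOPS, MAD-only, for the GTX 280, which is where the "up to twice as many FLOPS" comes from). Ballpark numbers, nothing official:

Code:
# Rough illustration: paper GFLOPS vs. delivered folding throughput.
peak_4870 = 1200.0    # HD 4870 theoretical SP GFLOPS (commonly quoted figure)
peak_gtx280 = 622.0   # GTX 280 theoretical SP GFLOPS, MAD only (commonly quoted figure)

# "GTX 280 is X quicker" factors for each protein size, from the comparison above.
gtx_speedup = {
    "small (~500 atoms)": 2.00,
    "medium (~1200 atoms)": 1.40,
    "large (~5000 atoms)": 1.20,
}

paper_ratio = peak_4870 / peak_gtx280   # ~1.9x in the 4870's favour on paper
for workload, s in gtx_speedup.items():
    # The 4870 delivers 1/s of the GTX 280's throughput despite ~1.9x the peak,
    # so per peak FLOP it only manages this fraction of the GTX 280's efficiency:
    relative_efficiency = (1.0 / s) / paper_ratio
    print(f"{workload}: HD 4870 at {relative_efficiency:.0%} of the GTX 280's per-peak-FLOP efficiency")

In other words, on these workloads the ATI card extracts somewhere around a quarter to not quite half of the useful work per theoretical FLOP that the nVIDIA card does, which is exactly why the paper number tells you so little.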
 
Unfortunately it doesn't work that way. AMD can't simply undercut Intel's pricing and expect to gain massive amounts of marketshare. It would take aggressive marketing and probably a greater manufacturing capacity than they have available to them to do that. These are two things AMD can't really afford.

Obviously, coming out with a better-performing CPU is the best way to gain market share, but AMD has been coming in behind Intel for several years now, so it doesn't seem like that's going to happen anytime soon. I'm sure they're working on it, but in the meantime, rather than just continue to lose market share, go after guys like me who would rather buy the $100 P2 550 than the $190 E8500. I'm not saying this is the super master genius strategy that will allow AMD to rule the world, just that if your CPUs can't outperform Intel's clock for clock, market them as outperforming them dollar for dollar.

Owning the CPU and graphics chip market are two different things. How would having the graphics card market ensure them ownership of the $200 and lower price point?

It wouldn't. By making graphics cards that perform better than their Nvidia counterparts, which they pretty much do now, they own the graphics market. Obviously they've got that figured out, so let that be their main breadwinner. By making CPUs that have near or equal performance to their Intel counterparts but cost half as much, which they pretty much do now also, they'll own that market as well.
 

You're posting a research paper showcasing FLOPs and then you showcase an individual's opinion regarding the results and how he thinks they relate to Folding@Home scores.

Do you seriously think the two have anything to do with actual computational performance? First of all, the AMD Brook+ code (the software) is not even close to being able to truly harness the power of the AMD RV770 GPU (the point of my argument). So to use Folding@Home (whose AMD client is coded in this fashion) as an example or to prove your point is an intellectual fallacy (it's like ignoring that Car 1 has less fuel than Car 2 and then pointing out that Car 1 can't travel as far as Car 2). Then of course there is the obvious: the article I linked discussed double-precision computational performance, whereas what you linked discusses single precision (simpler calculations). And I have to add more: the article I linked explicitly mentioned the need for a host processor to feed the GPUs information. Feeding a highly parallel number-crunching monster means needing a faster host processor to feed it (and one which is also increasingly parallel, thus adding more complexity to the equation, as the software that feeds the data must also be made multi-threaded).

So, as I stated before, AMD moving to Vision and concentrating on their GPUs will likely mean better tools to harness the performance of their GPUs.

GFLOPS are representative of peak theoretical performance. How close you get to that performance is decided by how well you can tap it (API/code/software).

The perfect example can be seen when you compare individual shader performance:

[chart from the Beyond3D thread linked below: per-shader throughput, RV770 vs. GT200]

Link: http://forum.beyond3d.com/showthread.php?t=49327&page=2

What you're seeing here is that the more calculations are needed, the larger the performance gap. So if you can truly tap into the computational performance of the RV770, it is around 200% faster than a GT200. This requires complexity (such as GRID, which uses several streaming shaders for FoV, DoF and motion blur, as well as the usual shaders, all at the same time).

It's like saying that a single-core Core i7 @ 3 GHz is faster than a quad-core Core i7 with HT @ 2.8 GHz (all the while limiting yourself to only single-threaded apps to get your point across). The software is a VERY important factor. You're about to see the importance of computational performance as nVIDIA ditches its old strategy and releases something that is just as parallel as, or far more parallel than, what AMD has with the GT300. Rumors currently point to an MIMD architecture (Multiple Instruction stream, Multiple Data stream).
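
To put some numbers on that single-core vs. quad-core analogy, here's a tiny Amdahl's-law sketch (the clocks and the parallel fractions are made-up illustrative values, and HT is ignored for simplicity):

Code:
# Amdahl's-law sketch: a faster single core vs. a slightly slower quad core.
# Clocks and parallel fractions are illustrative, not measurements.
def speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup from parallelising a fraction of the work."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

single_clock = 3.0   # GHz, hypothetical single-core chip
quad_clock = 2.8     # GHz, hypothetical quad-core chip

for p in (0.0, 0.5, 0.9, 0.95):
    single = single_clock * speedup(p, 1)   # always 3.0: one core gains nothing from parallel code
    quad = quad_clock * speedup(p, 4)
    winner = "quad wins" if quad > single else "single wins"
    print(f"parallel fraction {p:.0%}: single ~{single:.2f}, quad ~{quad:.2f} -> {winner}")

Run purely single-threaded code and the 3 GHz single core "wins"; let the software use the extra cores and the picture flips completely. Swap "cores" for "stream processors" and that's the whole GPGPU argument.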

Yes I too am very knowledgeable in this area lol.
 
You're posting a research paper showcasing FLOPs and then you showcase an individual's opinion regarding the results and how he thinks they relate to Folding@Home scores.

Do you seriously think the two have anything to do with actual computational performance? First of all, the AMD Brook+ code (the software) is not even close to being able to truly harness the power of the AMD RV770 GPU (the point of my argument). So to use Folding@Home (whose AMD client is coded in this fashion) as an example or to prove your point is an intellectual fallacy (it's like ignoring that Car 1 has less fuel than Car 2 and then pointing out that Car 1 can't travel as far as Car 2). Then of course there is the obvious: the article I linked discussed double-precision computational performance, whereas what you linked discusses single precision (simpler calculations). And I have to add more: the article I linked explicitly mentioned the need for a host processor to feed the GPUs information. Feeding a highly parallel number-crunching monster means needing a faster host processor to feed it (and one which is also increasingly parallel, thus adding more complexity to the equation, as the software that feeds the data must also be made multi-threaded).

So, as I stated before, AMD moving to Vision and concentrating on their GPUs will likely mean better tools to harness the performance of their GPUs.

GFLOPS are representative of peak theoretical performance. How close you get to that performance is decided by how well you can tap it (API/code/software).

The perfect example can be seen when you compare individual shader performance:

[chart from the Beyond3D thread linked below: per-shader throughput, RV770 vs. GT200]

Link: http://forum.beyond3d.com/showthread.php?t=49327&page=2

What you're seeing here is that the more calculations are needed, the larger the performance gap. So if you can truly tap into the computational performance of the RV770, it is around 200% faster than a GT200. This requires complexity (such as GRID, which uses several streaming shaders for FoV, DoF and motion blur, as well as the usual shaders, all at the same time).

It's like saying that a single-core Core i7 @ 3 GHz is faster than a quad-core Core i7 with HT @ 2.8 GHz (all the while limiting yourself to only single-threaded apps to get your point across). The software is a VERY important factor. You're about to see the importance of computational performance as nVIDIA ditches its old strategy and releases something that is just as parallel as, or far more parallel than, what AMD has with the GT300. Rumors currently point to an MIMD architecture (Multiple Instruction stream, Multiple Data stream).

No, I am posting real-world performance over an empty theoretical maximum... learn to differentiate.
You sound like the console fans who tried to use GFLOPS as an indication of the "powah" hidden in their consoles... and failed.

Yes I too am very knowledgeable in this area lol.

Like you "know" about TripleHead2Go?:

http://www.hardforum.com/showpost.php?p=1034609852&postcount=44

Show me a real-world difference, not empty GFLOPS... until then, it's all a pipe dream...
 
No, I am posting real-world performance over an empty theoretical maximum... learn to differentiate.
You sound like the console fans who tried to use GFLOPS as an indication of the "powah" hidden in their consoles... and failed.



Like you "know" about TripleHead2Go?:

http://www.hardforum.com/showpost.php?p=1034609852&postcount=44

Show me a real-world difference, not empty GFLOPS... until then, it's all a pipe dream...

Uhh, I just showed you the real difference in that graph and linked you to the source. Either you're just not that knowledgeable and cannot comprehend it, or you're a rabid fanboi wasting my time (any engineer in the field would acknowledge the validity of my statements). The graph shows computational performance (and it actually mirrors the theoretical performance difference).

Of course, other things come into play in a game (such as RBE and TMU throughput) in determining the overall performance. The graph ONLY highlights computational performance.

As for my knowledge of TripleHead2Go: I was stating that it did not natively support games (on the premise of performance). I was also under the assumption that folks were talking about TripleHead (the technology on the Parhelia), as folks were discussing graphics card features and someone came in stating that it had already been done on a graphics card (which would insinuate the Parhelia). I am not the only one who holds this opinion. The Parhelia just did not have the graphics power to push most game titles (it fell short, VERY short, of expectations).

TripleHead2Go is not an integrated technology which sits natively on the GPU core. It's an added box. Not at all comparable to Eyefinity.

Also, we're talking about six screens for the RV870 (not just three).

Your Folding@Home test was not real-world performance. It is limited by the tools used to code it (Brook+). Anyone who has coded with Brook+ will tell you it is inferior to CUDA and much harder to optimize and code for. That is where the discrepancy in performance comes from. If you look at actual shader throughput with complex shaders, it's not even a fair fight (do you actually know the difference between simple shaders and double precision?). I don't think you possess the tools necessary (the knowledge) to debate this with me. Without a greater understanding of GPU architectures, you're best to simply sit back and learn.
 
Uhh, I just showed you the real difference in that graph and linked you to the source. Either you're just not that knowledgeable and cannot comprehend it, or you're a rabid fanboi wasting my time (any engineer in the field would acknowledge the validity of my statements). The graph shows computational performance (and it actually mirrors the theoretical performance difference).

Of course, other things come into play in a game (such as RBE and TMU throughput) in determining the overall performance. The graph ONLY highlights computational performance.

As for my knowledge of TripleHead2Go: I was stating that it did not natively support games (on the premise of performance). I was also under the assumption that folks were talking about TripleHead (the technology on the Parhelia), as folks were discussing graphics card features and someone came in stating that it had already been done on a graphics card (which would insinuate the Parhelia). I am not the only one who holds this opinion. The Parhelia just did not have the graphics power to push most game titles (it fell short, VERY short, of expectations).

TripleHead2Go is not an integrated technology which sits natively on the GPU core. It's an added box. Not at all comparable to Eyefinity.

Also, we're talking about six screens for the RV870 (not just three).

Your Folding@Home test was not real-world performance. It is limited by the tools used to code it (Brook+). Anyone who has coded with Brook+ will tell you it is inferior to CUDA and much harder to optimize and code for. That is where the discrepancy in performance comes from. If you look at actual shader throughput with complex shaders, it's not even a fair fight (do you actually know the difference between simple shaders and double precision?). I don't think you possess the tools necessary (the knowledge) to debate this with me. Without a greater understanding of GPU architectures, you're best to simply sit back and learn.

Your graph claims 183% for GRID... the real world would like a word with you:
http://www.pcgameshardware.com/aid,...ersus-Nvidia-Geforce-GTX-275/Reviews/?page=13

Your math is flawed... like I said, GFLOPS are useless. Let's see why:
AMD claims higher GFLOPS... but loses out in Folding@Home... real world > theoretical
AMD claims higher GFLOPS... but loses out in GRID... real world > theoretical

Why don't you show me REAL-WORLD numbers, and not useless theoretical numbers?

It's always "if you tap into the MAGIC the RIGHT way... it should work better"... while the real-world performance shows a different tale. :rolleyes:
 
That article didn't say that AMD is going to quit making CPUs. It said that they are changing their marketing strategy and concentrating more on graphics chips.

It didn't even say that; it's PR, come on. "game man" is making something out of nothing.
 
I think this is relevant:
http://crd.lbl.gov/~dhbailey/dhbpapers/twelve-ways.pdf

Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers
David H. Bailey
June 11, 1991
Ref: Supercomputing Review, Aug. 1991, pg. 54-55
 
That's only relevant to your arguments, which in turn are barely relevant to the subject.

Elmo, you're clear IMO. I *thought* it was you who'd sorta crapped on a different thread I was interested in, but it wasn't, and I suck as a result. Sorry! Atech, though, blech.

Here's something that actually IS relevant to the topic:

AMD recently revealed its AMD Vision branding which helps to make clear separations between different levels of AMD-based notebooks. In addition to the current three levels – Vision Basic, Vision Premium and Vision Ultimate – the company will add the high-end Vision Black to the categories in the first quarter of 2010. ~Digitimes

AMD’s 2009 notebook platforms, also announced today, serve as the first proof points for VISION Technology. Mainstream OEM notebooks offered with VISION have next-generation HD graphics technology for rich, vivid HD and Blu-ray video playback, life-like 3D games, brilliant, clear photos, and multi-tasking power for editing photos, music and videos. ~AMD

AMD's site is pretty clear-cut when it says VISION will not be something marketable in a desktop sense until 2010, and that it is literally just marketing jazz. Then, when it comes to desktop, it will add a fourth tier to VISION, which is VISION Black.

In summary, the whole VISION marketing bit is really to tell consumers "if you want to check email, pick this; if you want to do hours of Photoshopping, choose this." It's perfect for in-store use, since most consumers just want a PC that fits their needs. Having said this, again, IT'S A NEW YORK TIMES ARTICLE. I knew more about PCs in 3rd grade than their most tech-savvy writer knows now.
 
AMD bought ATi for a reason... I believe they are going to integrate graphics chips with CPUs: in Intel's case, you will buy a board, a CPU and a card, and in AMD's case, you will just buy the board and "the chip". The chip will be made in such a manner that it runs both system and graphics at incredible speed. The CPU part of the chip will be carefully tweaked to push the GPU part to its limit, and you will have a gaming rig Intel won't be able to compete with. Sure, pure processing power for business applications will stay on Intel's side, but AMD will be THE rig for gamers. That's all IMHO, of course, but that's what I'd do if I were AMD...
 
The only way in heck that they can pull off even 5870-class graphics with Fusion is to have the graphics chip on the CPU able to CrossFire with a very powerful chipset, which means it won't be able to be passively cooled, or at least not effectively.

They won't go that route. The reason the chipmakers want to put graphics on the CPU die is the ever-increasing demands on the chipset. If graphics were lifted from the chipset, they could go back to smaller heatsinks.
 
As for Intel gouging, I don't know. I bought my i7 920 in the first week they were available and paid ~$180 (granted that was with the MS cashback but still). I am happy.
 