Join us on November 3rd as we unveil AMD RDNA™ 3 to the world!

I am really thinking this time we are going to see a slightly bigger 7950 XT. They can do a lot with a refresh. The rumored higher-core-count Navi might be a bigger chip. I mean, the GCD is only ~300mm², so there is plenty to work with there. They could have a big refresh even before RDNA 4.
A little would probably go a long way, I imagine. I mean, all the cache that takes up tons of die space is on the chiplets; the GCD is all logic... even if they decided to go up to 350mm² or so, the amount of extra horsepower would be impressive. I don't know if they will... heck, this might be it for this gen. I think of it like early Zen. As groundbreaking as Zen 1 was, it was the follow-ups that took the idea and ran with it. I think they were probably conservative this gen. Make sure it works; they couldn't afford to have a generation fall on its face. Knowing how Su dealt with Zen, though, I'm sure they already have the next gen well on the way... because they are probably going to use the exact same cache chiplets. I wouldn't be shocked if they already had very early engineering bits for the next gen.
 
I think there's a bit of an assumption here about how easy scaling would be.

There may be a reason a lot of the 7900 XT numbers are around 5/6 of the 7900 XTX numbers (is the binning done to keep just 5 perfectly working MCDs?), and why Navi 32 figures tend to be around 4/6.

It looks like it's roughly a 16-CU-per-MCD design, so would it mean going to 8 MCDs? Going up to a 512-bit memory bus and 256 MB of Infinity Cache?

All possible, I would imagine, but I would also imagine that in their design, testing, and modelling they stopped as high as worked well enough; depending on how much power the interconnect uses, it could start to add massive power/heat/latency overhead as it gets bigger.

If you add 85mm² for the MCDs and 100mm² more on the GCD, does the resulting 700-725mm² of silicon for the full package fit in the same packaging used now? Does all the cost advantage versus Nvidia vanish?
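Back-of-envelope for that area math, using approximate public die sizes (Navi 31 GCD ~300mm², each MCD ~37mm²); these are ballpark figures, and the +100/+85 additions are just the hypothetical from the paragraph above:

```python
# Rough area math for a hypothetical scaled-up Navi 31.
# Die sizes are approximate public figures; the +100 / +85 are the
# hypothetical additions discussed above, not anything announced.

gcd = 300                               # mm^2, Navi 31 GCD (approx.)
mcd = 37                                # mm^2, one MCD (approx.)

current = gcd + 6 * mcd                 # ~522 mm^2 of silicon today
scaled = (gcd + 100) + (6 * mcd + 85)   # ~707 mm^2, i.e. the 700-725 range

print(f"current silicon:   ~{current} mm^2")
print(f"scaled-up silicon: ~{scaled} mm^2")
```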

It all seems possible, but the notion that it would be easy seems like a stretch (I'd put it in "we really do not know" territory).
 
I think there's a bit of an assumption here about how easy scaling would be.

There may be a reason a lot of the 7900 XT numbers are around 5/6 of the 7900 XTX numbers (is the binning done to keep just 5 perfectly working MCDs?), and why Navi 32 figures tend to be around 4/6.

It looks like it's roughly a 16-CU-per-MCD design, so would it mean going to 8 MCDs? Going up to a 512-bit memory bus and 256 MB of Infinity Cache?

All possible, I would imagine, but I would also imagine that in their design, testing, and modelling they stopped as high as worked well enough; depending on how much power the interconnect uses, it could start to add massive power/heat/latency overhead as it gets bigger.

If you add 85mm² for the MCDs and 100mm² more on the GCD, does the resulting 700-725mm² of silicon for the full package fit in the same packaging used now? Does all the cost advantage versus Nvidia vanish?

It all seems possible, but the notion that it would be easy seems like a stretch (I'd put it in "we really do not know" territory).
I know once you get into the 700mm² size for the interposer you have to cut speeds down to the ~2.7 TB/s range from the 5.3 TB/s it currently sits at, so there is a practical limit for the interposer that isn't terribly different from any other chip. The larger it gets, the higher the failure rate as well, since it is another piece of silicon that needs to be etched out; while not complex, it is still another wafer you have to use up. The interposer wafer may be on a cheaper process, but it still uses up a wafer that could be used elsewhere, so there is a raw-material cost associated with it that isn't small.
Chiplet designs are not an end-all-be-all "cheaper and better" solution; they are a tightrope, and you have to weigh your options there pretty carefully.
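To put rough numbers on the "another wafer you have to use up" point, here is a generic back-of-envelope using the standard dies-per-wafer approximation and a simple Poisson yield model; the defect density is a made-up illustrative number, and a passive interposer on an older process would in practice yield much better than a dense logic die of the same area:

```python
import math

# Candidate dies per 300 mm wafer (standard approximation) plus a toy
# Poisson yield model. The defect density below is purely illustrative.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

for area in (530, 700):   # ~current Navi 31 silicon footprint vs a ~700 mm^2 piece
    n = dies_per_wafer(area)
    y = poisson_yield(area)
    print(f"{area} mm^2: ~{n:.0f} candidates/wafer, ~{y:.0%} toy yield -> ~{n*y:.0f} good")
```

Even before yield, a ~700mm² piece of silicon only fits on the order of 75 times on a 300mm wafer, so it is real wafer capacity spent no matter how cheap the process is.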
 
Is there any reason to need one big interconnect? Is it possible to have two under each half to connect to the MCs?
 
Is there any reason to need one big interconnect? Is it possible to have two under each half to connect to the MCs?
That's where 3D stacking comes in. It gets really interesting then, but it's complex, and complexity inherently inflates cost; they want to keep costs lower.
Here is a diagram from an older TSMC release but the fundamentals remain the same.
[Attached diagrams: TSMC advanced packaging / interposer technology]

TSMC's interposer technology is a lot more complex and advanced than many probably realize.
 
I know the tricky part is getting everything to line up, but that seems to imply it's completely possible to use two or possibly even four smaller interconnects.

Which means you could theoretically go past 450mm² on the GCD and surround it with up to 9 MCs without even going up to 1-hi stacking, running 32 GB of memory. So about 1,000mm² of working silicon on these same nodes?
 
I know the tricky part is getting everything to line up, but that seems to imply it's completely possible to use two or possibly even four smaller interconnects.

Which means you could theoretically go past 450mm² on the GCD and surround it with up to 9 MCs without even going up to 1-hi stacking, running 32 GB of memory. So about 1,000mm² of working silicon on these same nodes?
You can. TSMC and Broadcom have partnered up and developed a method for interlocking them, allowing for a max interposer size of 3,400mm². But it is not cheap, and it's currently only used for their enterprise equipment.

But at that size transfer speeds are slower and latency from end to end is both measurable and statistically significant.

Look at the MI250X; it's up at a whopping 1,540mm² once all the chips are put in there.
 
I know the tricky part is getting everything to line up, but that seems to imply it's completely possible to use two or possibly even four smaller interconnects.
Wouldn't that be going back to the old SLI-type design, with all the issues that come with it, if everything does not share the same L3? A bit like what happened on the CPU side, but more so given the superbly multithreaded, similar-workload-on-all-chips nature of GPU work?

It is possible that acting as a single large Infinity Cache, thanks to a ridiculous interconnect speed (more than twice an M1 Ultra's), is what makes everything work like a single regular monolithic GPU to the CPU/game-engine side, out of the box.

If you lose either (acting as a common cache, because of lower speed or being cut in two), maybe it starts to act more like having two GPUs?
 
Wouldn't that be going back to the old SLI-type design, with all the issues that come with it, if everything does not share the same L3? A bit like what happened on the CPU side, but more so given the superbly multithreaded, similar-workload-on-all-chips nature of GPU work?

I'm not sure why it would, since the Infinity Cache is already split off from the GCD onto the MCs. If something has to go from one MC to another, it transfers across the GCD, not the interconnect. Unless I'm totally misreading how the interconnect works. I thought it was basically a dumb layer that worked like a tiny, high-bandwidth PCB.
 
I'm not sure why it would, since the Infinity Cache is already split off from the GCD onto the MCs. If something has to go from one MC to another, it transfers across the GCD, not the interconnect. Unless I'm totally misreading how the interconnect works. I thought it was basically a dumb layer that worked like a tiny, high-bandwidth PCB.
The "interconnect", as you're calling it, is the interposer tech I posted above. It's actually pretty complicated, and it is a chip in itself, probably done on TSMC 10nm or possibly 7nm.
 
I'm not sure why it would, since the Infinity Cache is already split off from the GCD onto the MCs. If something has to go from one MC to another, it transfers across the GCD, not the interconnect. Unless I'm totally misreading how the interconnect works. I thought it was basically a dumb layer that worked like a tiny, high-bandwidth PCB.
If all of the GCD can see all the MCs through the interconnect (that's what makes it seem like a single giant cache), I am not sure why data would even need to go from one MC to another (or be perceived to, by the GCD). The issue, if I understand the point, was the GCD losing the ability to see all the MCs at the same time, which would make it start to matter which part of the cache things are in, effectively becoming two different caches.
 
The issue, if I understand the point, was the GCD losing the ability to see all the MCs at the same time, which would make it start to matter which part of the cache things are in, effectively becoming two different caches.

If that's the case then I misunderstood how much muscle was in the interconnect/interposer. I thought it saw six individual caches, and any info exchanged between them went through the GCD.

You could get around that by going 1-hi and using some of the cache to mirror the info on the 0-hi layer from the opposite side of the die. But then you're probably running into build-cost issues that don't come out any better than the cost of a bigger interposer.
 
I thought it saw six individual caches, and any info exchanged between them went through the GCD.
Why would one cache exchange data with another cache?

The way I see it, the GCD uses them, and because all of the CUs see all the MCDs, all the logic can compute with the same data as if it were one large L3 cache on a monolithic die. If some CUs start to see only some MCDs and other CUs see others, you break that concept.

Obvious warning, I know nothing about anything.
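To make the "one big L3 that every CU sees" idea concrete anyway, here is a generic sketch of address-interleaved cache slices, which is a common way a physically split last-level cache is presented as a single pool; this is a toy illustration, not a claim about AMD's actual Infinity Cache mapping:

```python
# Toy model of address-interleaved cache slices: every client uses the same
# address-to-slice mapping, so the split cache behaves like one big pool.
# Not AMD's actual implementation; slice count and line size are assumptions.

NUM_SLICES = 6          # e.g. one slice per MCD
LINE_BYTES = 128        # assumed cache-line size

def slice_for_address(addr: int, num_slices: int = NUM_SLICES) -> int:
    """Pick which slice holds a given cache line."""
    return (addr // LINE_BYTES) % num_slices

# Every CU asking for address X always lands on the same slice:
# one coherent pool, no slice-to-slice copying needed.
for addr in (0x0000, 0x0080, 0x0100, 0x1240):
    print(hex(addr), "-> slice", slice_for_address(addr))

# If half the CUs could only reach slices 0-2 and the other half only 3-5,
# the same address would have to be cached twice (or be unreachable), and you
# are effectively back to two separate caches -- the SLI-like problem above.
```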
 
The way I see it, the GCD uses them, and because all of the CUs see all the MCDs, all the logic can compute with the same data as if it were one large L3 cache on a monolithic die.

That's the way I thought it worked, too. In which case, what difference does it make if they're on separate interposers? They all look the same to the GCD.
 
That's the way I thought it worked, too. In which case, what difference does it make if they're on separate interposers? They all look the same to the GCD.
If you mean as long as they're connected by something very fast like now (or the Infinity Fabric that connected the L3 on RDNA 2)? I am not sure what we are talking about if that's the case; I am completely lost.
 
So is consistently being top dog in performance and features.
Yep. Who wants to spend for high-end graphics and not even be able to enable high-end graphical features? :ROFLMAO: DLDSR, DLAA, DLSS, ShadowPlay/NVENC, ray-tracing performance, CUDA support, better drivers/control panel on Nvidia... and the best raster performance to boot! Plus nice and quiet cards.
 
I don't see what the big deal is, what the hoopla is all about. You get basically bleeding-edge performance without all the super-bloated fantasy and magic buffoonery of DLSS. You get blistering raster performance and a much lower TDP, no melting power connector, and a lower price. What's the big problem?

I can see AMD sales crushing Nvidia on price alone. But fans love their boys in the end.

Anyways, I have a 6900 XT with a waterblock sitting in a box. I moved to a new place and decided to use my 6800H AMD / 3070 Ti laptop for a long while. I may get an XTX and rebuild a desktop, but then my $2,000 Lenovo Legion will not get used, so I'll pass and just enjoy my laptop with its smooth G-Sync connection to the Odyssey G7.
AMD has their own "super-bloated fantasy and magic buffoonery" in FidelityFX Super Resolution (FSR). In version 3, which they talked about during their presentation, they're even doing AI frame generation just like DLSS 3.0.
 
Brought to you by Nvidia advertising... We all got it, you only want to buy Nvidia
I'm in the same boat: when I upgrade my GPU it will only be for ray tracing. When I look at benchmarks I will only be comparing ray-traced games. DLSS is an added bonus, I guess.
Raster performance is meaningless to me at this point. As GPUs get faster and ray tracing becomes more ubiquitous, I think more people will fall into this category.
 
I'm in the same boat: when I upgrade my GPU it will only be for ray tracing. When I look at benchmarks I will only be comparing ray-traced games. DLSS is an added bonus, I guess.
Raster performance is meaningless to me at this point. As GPUs get faster and ray tracing becomes more ubiquitous, I think more people will fall into this category.

Just keep in mind people said that about physics engines and how we would need dedicated hardware to run them correctly. These days it's done in software on the CPU. Just because we're using dedicated hardware today doesn't mean we will need to in the future. Technology tends to shift suddenly, sometimes without warning; I imagine as ray tracing becomes more mainstream we will find better ways to optimize it that are less demanding on hardware. But we're not even close to mainstream when you need a halo card to run ray tracing. Raster performance is still very important to just about everyone in the sub-$500 video card world, which is most of the market.
 
I'm in the same boat: when I upgrade my GPU it will only be for ray tracing. When I look at benchmarks I will only be comparing ray-traced games. DLSS is an added bonus, I guess.
Raster performance is meaningless to me at this point. As GPUs get faster and ray tracing becomes more ubiquitous, I think more people will fall into this category.
We have been hearing that ray tracing is the future and every game will use it for 5 years now... and we are still around the 70-game mark. (Technically you can count 10 or so more if you're willing to accept things like a ray-traced poker game and a handful of pre-alpha Steam games with 1-star user reviews as legit RT titles.)

The way I see it... the consoles can do a very minimal amount of RT at a high cost... so I don't expect we will be seeing a ton of RT for a while yet. I'm sure it will matter at some point; sure, if everyone could run it, RT makes a ton of sense. But all the "it saves developers so much time" BS is true only if they also didn't have to build a raster version for 99% of their actual customers. IMO RT is going to be an "it's going to matter soon" feature for at least another 2 or 3 generations. Basically until Sony and MS update their consoles again with a new AMD solution... and then add another year or two for those consoles to actually populate. We are years away from game developers just making an RT version and that is all.
 
I'm in the same boat: when I upgrade my GPU it will only be for ray tracing. When I look at benchmarks I will only be comparing ray-traced games. DLSS is an added bonus, I guess.
Raster performance is meaningless to me at this point. As GPUs get faster and ray tracing becomes more ubiquitous, I think more people will fall into this category.
You will be in a small minority if you're only looking at ray-tracing benchmarks.
 
You will be in a small minority if you're only looking at ray-tracing benchmarks.
Honestly, that's fine. The problem is when people get defensive about this and make it sound like rasterization is already dead. Lol, no. Not even close. We just need to pop over to Steam and look at the most-played games and the most popular GPUs to see just how important ray tracing is to the current consumer base...
 
I got to thinking, which of these options will happen first:

1) Graphene leaves the laboratory

2) Nuclear Fusion Power is Widespread

3) Fully ray-traced AAA games playable at 60fps at FHD
 
Lol, that's silly. Who here even said they don't like ray tracing? The only debate is how important it actually is for the market. *edit* Currently.
It's very important for the market. RT is the future. If it's going to be adopted, RT hardware and games are needed. It's an iterative process. If everyone had your viewpoint, no one would buy RT hardware/games until it was perfect. Those that like it and want to see more of it will support the best versions of RT hardware/software.

NV has the best implementation currently, and they are going to reap the benefits from those that like RT. This further builds their mindshare.
 
3) Fully ray-traced games
You're too late on that one, as there were real-time ray-traced games years ago. I can't find any to link to now though, as search engines just put stuff about RTX at the top for any search about ray-tracing.
 
AMD has their own "super bloated fantasy and magic bafoonery" in FidelityFX Super Resolution (FSR). In version 3 which they talked about during their presentation, they're even doing AI frame generation just like DLSS 3.0.
Sure, but they are fast in raster. Anyways, who cares. Worship the leather jacket if you want. Worship red if you want. At the end of the day it's your money to waste. Meanwhile I am trying to repair this 200-gallon transfer tank and get myself 200 gallons of off-road diesel in case it gets to $20 a gallon. I have a hungry Kubota that I need to feed instead of a new GPU that is so unimportant in my life. And nope, not a fanboy batting for AMD. I am typing these useless words on an Nvidia 3070 Ti laptop GPU, which is a nice, fast, zippy GPU. I am impressed. It's as fast as its desktop counterpart. Really impressive how much performance NV packed into a mobile GPU.
 
You're too late on that one, as there were real-time ray-traced games years ago. I can't find any to link to now though, as search engines just put stuff about RTX at the top for any search about ray-tracing.
Heh, something like: https://web.archive.org/web/2008101...per.com/article.php?aid=334&type=expert&pid=1

I'd be curious to know how it would perform on modern CPUs; a Core 2 Extreme QX6700 (4 cores from 2006) got 16.9 fps at 256x256, and the article claims it scales very well with added cores. At a guess, around 100 fps at 640x480 for a Ryzen 7950X?
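For what it's worth, here is a rough sanity check of that guess; aside from the core count, every multiplier below is an assumption rather than a measurement, so treat the output as ballpark only:

```python
# Back-of-envelope scaling of the old software ray tracer result.
# Baseline: QX6700, 4 cores, 16.9 fps at 256x256 (from the linked article).
# The clock and IPC multipliers are rough guesses, not benchmarks.

baseline_fps = 16.9
baseline_pixels = 256 * 256
target_pixels = 640 * 480

core_scaling = 16 / 4        # 7950X: 16 cores vs 4 (article says it scales well with cores)
clock_scaling = 5.0 / 2.66   # ~5.0 GHz boost vs 2.66 GHz (approximate)
ipc_scaling = 2.5            # guessed per-clock gain from Core 2 to Zen 4, incl. SMT/wider SIMD

speedup = core_scaling * clock_scaling * ipc_scaling
est_fps = baseline_fps * speedup * baseline_pixels / target_pixels
print(f"~{speedup:.0f}x faster -> ~{est_fps:.0f} fps at 640x480")
```

With those guesses it lands somewhere in the several-dozen-to-low-hundreds fps range at 640x480, so ~100 fps seems like a plausible order of magnitude.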
 

Lol, that's silly. Who here even said they don't like ray tracing? The only debate is how important it actually is for the market. *edit* Currently.
The meme is funny....
But ya, no one is saying RT isn't the future. The only argument is whether we live in that future yet. lol

I live in the present/reality. The present reality is that RT titles are practically nonexistent... there are a few good examples, no doubt. I don't deny RT looks great in some titles. It's just a very small minority of games that even have it right now. All the talk about how developers will just switch over to RT lighting and save time on game development etc. is such BS, because 99% of the market can't run RT or has cards that can barely run RT (not just AMD; two generations of Nvidia cards don't run it very well either). Developers will be focusing on raster lighting for a long time yet...

RT will become de facto at some point. Sure, few doubt that. The question is: does the first $1,600 flagship that can handle it make every developer focus on RT lighting over raster shader lighting tricks? Does it convince anyone to release a Crysis... that only runs on a few flagships? Game developers learned that lesson. Super-high-fidelity games that make mid-range hardware cry... don't sell. Sure, they get used as benchmarks and people pretend they are the bestest games ever... but they don't sell.

RT will take off... when 6600/3050-class hardware and consoles can drive RT to acceptable frame rates. That seems like at least 2-3 generations off to me.
 
The meme is funny....
But ya, no one is saying RT isn't the future. The only argument is whether we live in that future yet. lol

I live in the present/reality. The present reality is that RT titles are practically nonexistent... there are a few good examples, no doubt. I don't deny RT looks great in some titles. It's just a very small minority of games that even have it right now. All the talk about how developers will just switch over to RT lighting and save time on game development etc. is such BS, because 99% of the market can't run RT or has cards that can barely run RT (not just AMD; two generations of Nvidia cards don't run it very well either). Developers will be focusing on raster lighting for a long time yet...

RT will become de facto at some point. Sure, few doubt that. The question is: does the first $1,600 flagship that can handle it make every developer focus on RT lighting over raster shader lighting tricks? Does it convince anyone to release a Crysis... that only runs on a few flagships? Game developers learned that lesson. Super-high-fidelity games that make mid-range hardware cry... don't sell. Sure, they get used as benchmarks and people pretend they are the bestest games ever... but they don't sell.

RT will take off... when 6600/3050-class hardware and consoles can drive RT to acceptable frame rates. That seems like at least 2-3 generations off to me.
Pretty much. I also never denied it's the future. My reply is always that I'll care about it when the games I play use it. Oh, and not lazy stuff like Forza, where it's photo-mode only. Why would I care about that? There are a few, and I mean a FEW, games that use it well, because those are the only ones people actually talk about. To the point where, for the 4090 launch, they still had to use CP2077, which is old news by now.

People who only care about RT are an extreme minority and will remain so for a while.
 
The meme is funny....
But ya, no one is saying RT isn't the future. The only argument is whether we live in that future yet. lol

I live in the present/reality. The present reality is that RT titles are practically nonexistent... there are a few good examples, no doubt. I don't deny RT looks great in some titles. It's just a very small minority of games that even have it right now. All the talk about how developers will just switch over to RT lighting and save time on game development etc. is such BS, because 99% of the market can't run RT or has cards that can barely run RT (not just AMD; two generations of Nvidia cards don't run it very well either). Developers will be focusing on raster lighting for a long time yet...

RT will become de facto at some point. Sure, few doubt that. The question is: does the first $1,600 flagship that can handle it make every developer focus on RT lighting over raster shader lighting tricks? Does it convince anyone to release a Crysis... that only runs on a few flagships? Game developers learned that lesson. Super-high-fidelity games that make mid-range hardware cry... don't sell. Sure, they get used as benchmarks and people pretend they are the bestest games ever... but they don't sell.

RT will take off... when 6600/3050-class hardware and consoles can drive RT to acceptable frame rates. That seems like at least 2-3 generations off to me.
You've pretty much hit the nail on the head here. The hardware literally has to be there and be fully capable and powerful enough at the mid to low end of the spectrum for RT to see widespread adoption. And it needs to be capable without anything like FSR or DLSS. FSR and DLSS won't go away, because they'll need to be kept around as crutches for when the hardware at the mid to low end is no longer capable of pushing new games well enough; likely after a new generation of hardware is released.

This is how it has always been because there's no point in making games that 90%+ of your customers cannot run well. At the current time RT hardware is nowhere near being capable of running RT except for the very highest end of cards. It will be a minimum of two and more likely three additional generations before the hardware gets to the crucial point. Even then it's still going to take longer because the capable hardware needs to be almost ubiquitous. Look at how many people are still running nVidia 10x0 class cards as well as Radeon RX 5x0 class cards. Those are the people that must have capable RT hardware before you can consider RT mainstream enough for games to properly use it.

When nVidia released the RTX cards it was obvious that it was going to take multiple generations before RT was going to be anything more than a gimmick. I figured it would need to be a minimum of three generations. nVidia has released the halo card of their third generation and we're still not seeing RT as anything but a gimmick. I agree it will need to be a minimum of two, and possibly up to four, more generations before RT truly matters. This isn't helped by both GPU makers essentially abandoning the low-end and lower-midrange segments, expecting previous generations of hardware to cover them. It may not be cost-effective to target those ranges with new hardware anymore, but the result is that a large portion of the market is stagnant.
 
You've pretty much hit the nail on the head here. The hardware literally has to be there and be fully capable and powerful enough at the mid to low end of the spectrum for RT to see widespread adoption. And it needs to be capable without anything like FSR or DLSS. FSR and DLSS won't go away, because they'll need to be kept around as crutches for when the hardware at the mid to low end is no longer capable of pushing new games well enough; likely after a new generation of hardware is released.

This is how it has always been because there's no point in making games that 90%+ of your customers cannot run well. At the current time RT hardware is nowhere near being capable of running RT except for the very highest end of cards. It will be a minimum of two and more likely three additional generations before the hardware gets to the crucial point. Even then it's still going to take longer because the capable hardware needs to be almost ubiquitous. Look at how many people are still running nVidia 10x0 class cards as well as Radeon RX 5x0 class cards. Those are the people that must have capable RT hardware before you can consider RT mainstream enough for games to properly use it.

When nVidia released the RTX cards it was obvious that it was going to take multiple generations before RT was going to be anything more than a gimmick. I figured it would need to be a minimum of three generations. nVidia has released the halo card of their third generation and we're still not seeing RT as anything but a gimmick. I agree it will need to be a minimum of two, and possibly up to four, more generations before RT truly matters. This isn't helped by both GPU makers essentially abandoning the low-end and lower-midrange segments, expecting previous generations of hardware to cover them. It may not be cost-effective to target those ranges with new hardware anymore, but the result is that a large portion of the market is stagnant.
Until it can be done on the consoles, it's not here; they are still the target for most of what gets released, and on PC developers are going to look at Steam hardware surveys and plan accordingly. Between the consoles and the fact that something like 50% of Steam users can't run ray-traced effects at a practical level, RT is sadly left as a "premium feature". So it will get as much attention as some accounting department determines is necessary to attract additional sales, or as much as advancing development tools drag ray tracing along, but how much developers choose to implement and tone down will be based on the target sales audience, and given where the consoles are at...

There are a number of things in play here: COVID and the mining bubble really messed up adoption rates of new cards.
Secondly, AMD made some promises to Microsoft and Sony about ray tracing on the consoles that have been problematic, to say the least. The Xbox Series X|S does have a select number of titles with minor ray-traced assets and effects:
https://knowtechie.com/which-xbox-series-xs-games-have-ray-tracing/
But it is a short list, and the developers ultimately decided the eye candy wasn't worth the headaches it caused to reach the needed framerates.
 
Hate to burst anyone's bubble, but let's wake up from fantasy land. AMD will not be crushing Nvidia's sales this gen; as always, they won't even come close.

Let's be real. The number of people willing to pay $999 for a Radeon card is extremely slim, even if it beats the RTX 4080. If AMD really wanted market share, they would price the 7900 XTX at around $699. They seem to be okay with their position as second place in the GPU market, which goes back to this meme: is Lisa Su really Jensen Huang's niece??? LOL. I think at this point only Intel can disrupt the GPU market, once they reach Battlemage or beyond.
 
Let's be real. The number of people willing to pay $999 for a Radeon card is extremely slim, even if it beats the RTX 4080. If AMD really wanted market share, they would price the 7900 XTX at around $699. They seem to be okay with their position as second place in the GPU market, which goes back to this meme: is Lisa Su really Jensen Huang's niece??? LOL. I think at this point only Intel can disrupt the GPU market, once they reach Battlemage or beyond.
It comes down to an allocation of silicon. AMD is TSMC's 3rd or 4th largest customer, but AMD has to make GPUs, CPUs, FPGAs, and consoles. AMD's chiplet designs let them get better yields, and they can mix and match processes to drive costs down, but ultimately they have a lot of ground to cover and a relatively limited amount of silicon to do it with.
For the GPU space, AMD is less about gaining market share and more about keeping a gaming presence; AMD's silicon is better allocated to their agreements with Sony, Microsoft, and the enterprise market.
AMD might sell every GPU they make, but they need to sell 10 GPUs to make the same profit they would from one EPYC, and last year AMD was turning enterprise customers away; customers were forced to buy Xeons instead because AMD was not able to meet demand for their EPYC lineup, and that is a big "loss" for AMD.
So yes, the new chiplet designs for their GPUs should get great yields and be very cost-effective, but from an accounting perspective that saved silicon should mostly be redirected to the enterprise market and not the consumer one.
 
Let's be real. The number of people willing to pay $999 for a Radeon card is extremely slim, even if it beats the RTX 4080. If AMD really wanted market share, they would price the 7900 XTX at around $699. They seem to be okay with their position as second place in the GPU market, which goes back to this meme: is Lisa Su really Jensen Huang's niece??? LOL. I think at this point only Intel can disrupt the GPU market, once they reach Battlemage or beyond.
You're not wrong about disrupting the market leader.
I think we should probably wait for benchmarks a bit, though. We don't know by how much the 7900 will beat the 4080. It seems like it will, but we don't even know that for sure.
I mean, how crazy do things need to get before people bite on AMD? If 90% of the time a 7900 XTX is within single digits of a 4090... and beats a 4080 by double digits in basically everything outside a couple of RT titles, for $200 less, I think people start biting at that point if they are on NV 2000-series or older.

I think shifts in market share can happen pretty quickly. And 90+% of the market is non-flagship parts. All the holding off on real mid-range next-gen parts from both NV and AMD is annoying. What AMD really needs to gain market share is a 7800 and a 7700. Come in with those cards at $700/$500 and put the screws on the last gen. I wish AMD had done that and just said fine, we are going to have to blow out a bunch of 6000-series cards and lose some money.

I agree with you, as much as AMD's tech is revolutionary. Alchemist sucking really sucks... hopefully BM does kick NV and AMD in the backside. I just hope Intel has learned enough lessons to do that.
 