AMD’s upcoming flagship GPUs should be 3x faster than the RX 6900 XT

Apple alone has something like 50% of TSMC's 5nm output, Intel has a significant portion as well, and then pile in Nvidia, AMD, and the numerous Chinese companies launching their own GPUs on TSMC 5nm next year. TSMC's 5nm is far more constrained than their 7nm, regardless of whether AMD moves their CPUs over. Yes, TSMC is ramping up 5nm, but the Arizona and Germany facilities have expected completion dates around 2025. The transition over to MCM is not the silver bullet you think it is. MCM simplifies things and allows for easier-to-build designs with fewer failures, and that's great, but you still need silicon to print them on, and that is where AMD is going to be hurting for supply.

AMD has settled very comfortably into the position of a boutique hardware designer: they make limited-run, high-performance parts at maximum margin. It's a good thing for them, and they will be making more parts than ever, but don't expect them to be overly present in retail channels.

We obviously see the situation from two different perspectives. 5nm capacity will continue to grow, it's the current leading node at TSMC and has capacity coming online regularly, not just in 4 years.

Again, look at CPU availability and pricing: MSRP and available at any retailer. That's the power of having an 80 mm² die vs the 519 mm² Navi die. That's over six Zen dies for each 6900/6800, and then toss in whatever yield issues there might be.
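A rough sketch of that per-wafer ratio, assuming a standard 300 mm wafer and the die areas quoted above (edge loss, scribe lines, and yield are ignored, so these are gross upper bounds):

```python
import math

WAFER_DIAMETER_MM = 300  # standard 300 mm wafer
wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

def max_dies_per_wafer(die_area_mm2):
    """Crude upper bound: wafer area / die area.
    Real counts are lower due to edge loss, scribe lines, and defects."""
    return wafer_area_mm2 // die_area_mm2

zen3 = max_dies_per_wafer(80)     # ~883 gross dies
navi21 = max_dies_per_wafer(519)  # ~136 gross dies
print(zen3, navi21, zen3 / navi21)  # ratio ~6.5 chiplets per Navi 21 die
```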

That advantage, being able to pick and choose which process is right for each component, will have a definite effect.

Hell, you don't even need to have each component made at the same fab. Cores on 5nm, cache on 7nm or Samsung 8nm, IO die with the same options, or even GF/Samsung 14nm.

The availability and performance of Zen 3 should really give a big clue to how this will all play out; even with all those different SKUs on 7nm, CPU availability only blipped last year versus consoles and GPUs.

Small die size (to enable the maximum number of parts per wafer with a low reject rate) and fab agility for components will have a positive effect on GPU and CPU production.

Whether that will be enough for consumer demand and crypto........¯\_(ツ)_/¯
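To put a rough number on the die-size/yield point above, here's a minimal sketch using the classic Poisson yield model; the defect density is a hypothetical value picked purely for illustration:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of defect-free dies under a simple Poisson model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

D0 = 0.1  # hypothetical defect density (defects per cm^2), illustration only

print(f"80 mm^2 Zen chiplet:  {poisson_yield(80, D0):.1%}")   # ~92% good dies
print(f"519 mm^2 Navi 21 die: {poisson_yield(519, D0):.1%}")  # ~59% good dies
```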
 
We obviously see the situation from two different perspectives. 5nm capacity will continue to grow, it's the current leading node at TSMC and has capacity coming online regularly, not just in 4 years.

Again, look at CPU availability and pricing: MSRP and available at any retailer. That's the power of having an 80 mm² die vs the 519 mm² Navi die. That's over six Zen dies for each 6900/6800, and then toss in whatever yield issues there might be.

That advantage, being able to pick and choose which process is right for each component, will have a definite effect.

Hell, you don't even need to have each component made at the same fab. Cores on 5nm, cache on 7nm or Samsung 8nm, IO die with the same options, or even GF/Samsung 14nm.

The availability and performance of Zen 3 should really give a big clue to how this will all play out; even with all those different SKUs on 7nm, CPU availability only blipped last year versus consoles and GPUs.

Small die size (to enable the maximum number of parts per wafer with a low reject rate) and fab agility for components will have a positive effect on GPU and CPU production.

Whether that will be enough for consumer demand and crypto........¯\_(ツ)_/¯
We'll see. I hope you're right on this, because I am very interested in what will likely be the 7700 XT, as I am going to be at 1440p for the foreseeable future (until this monitor dies). I am looking to decrease my office's thermals, and getting rid of an OC'd 2080 Ti will go a long way towards doing that.
 
https://www.techpowerup.com/269347/...gpus-built-on-samsung-8nm-instead-of-tsmc-7nm

https://www.techpowerup.com/266351/...w-density-metric-for-semiconductor-technology

Comparing apples to oranges to bananas: Samsung 8nm and TSMC 7nm are roughly the same density as Intel 10nm. Navi 2x and Ampere are at process parity.

Navi and Ampere are not at process parity at all. Look at the density of Navi 21 vs GA102: 51.5 MTr/mm² vs 41.1 MTr/mm². But that's not all: the TSMC N7P that RDNA 2 uses is a much more power-efficient node than the original N7 process used in RDNA 1, and it's the original N7 that the Samsung 8nm process is closest to.

But you kind of made my point. The poster I replied to above was comparing apples to bananas and using that comparison to argue that AMD would be the most power efficient going forward. We can't use this generation to forecast anything with regard to future power use. It will be a different story when they are both using the same TSMC process.
 
My assumption, based on rumors and my own speculation, is that it is transparent to the software. So from an API perspective it IS one GPU, but the hardware and driver do the magic under the hood.

Which, if true, would be a huge revolution, as big as multi-core CPUs. We could essentially double (or more, depending on how many chips they pack in there) framerates without any developer work, or traditional problems with AFR rendering.
Remember when DX12 was supposed to take care of this? We were supposed to have the ability to plug in multiple different graphics cards and DX12 was going to combine their power. That never really got support from the GPU makers since it was on them to code for it. Hopefully, the on-card stuff will work.
 
Again, look at CPU availability and pricing: MSRP and available at any retailer.
Look at CPU unit sales. On the consumer side, in any given quarter, AMD moves 20x as many GPUs as they do CPUs. There's a reason that AMD no longer breaks out the two in their reporting.
 
Look at CPU unit sales. On the consumer side, in any given quarter, AMD moves 20x as many GPUs as they do CPUs. There's a reason that AMD no longer breaks out the two in their reporting.

Which is it? Do we know they sell 20x as many GPUs as CPUs, or do they not break out the two in their annual reporting?

Since they don't break out sales in the computing and graphics division, we're left to look at market share and infer.

AMD consumer (and server) x86 market share has been increasing for years, and is at its highest level since 2006.

Graphics market share is sub 20% at this point.

In addition, they make one 80 mm² die that's used in every CPU (not APU) they sell, desktop or server, in products that start at $300 for a single cut-down die.

The same wafer space for 6 of those gets you one Navi 21 die, for a card that has an MSRP of $999.

You can make and sell six $300 (minimum) CPUs from the same wafer space as one $999 (maximum) GPU.

AMD is definitely prioritizing the products that make them the most money, and it's not GPUs. They'd rather sell small, cheap, easy to produce CPUs, and the availability shows that.
 
Remember when DX12 was supposed to take care of this? We were supposed to have the ability to plug in multiple different graphics cards and DX12 was going to combine their power. That never really got support from the GPU makers since it was on them to code for it. Hopefully, the on-card stuff will work.
Yes, and it worked. It's called mGPU and the API is fully functional. It was only used in a few games, but Rise of the Tomb Raider (for example) had almost double the FPS when using it. So I think that it works fine and is still supported.

The problem was that it required significant code changes on the engine level, and was costly to implement for developers (especially considering the waning multi-GPU market share). So the main issue was that it was too complicated and required dedicated work.

The idea here is that the hardware itself would control the multi-GPU and at the driver level it would appear as 1 GPU. This is not an easy thing, but I think it is possible. GPUs are already massively parallel, so when you code graphics you already have to take multi-processing into account.

So I think it could work, and if AMD figures it out it will be huge.
 
I’d be happy if my next graphics card used less than the 400+ W my 3090 does. If it ended up using more, I’m gonna have to do some thinking.

Sick of how hot my study gets
Same. My office gets stupid hot when the wife and I are both gaming. And I keep the house a nice and chilly 72F in the summer. Unfortunately here in San Antonio our December is still 80F outside.
 
Same. My office gets stupid hot when the wife and I are both gaming. And I keep the house a nice and chilly 72F in the summer. Unfortunately here in San Antonio our December is still 80F outside.
That's a feature, not a bug.
 
Which is it? Do we know they sell 20x as many GPUs as CPUs, or do they not break out the two in their annual reporting?
There are sources other than their registered quarterly & annual reporting, and they all point to about a 20x gap in unit sales.

Based on your post, the GPUs are major money losers and the entire lineup should be killed off. That's not a position I would bet in favor of.
 
Same. My office gets stupid hot when the wife and I are both gaming. And I keep the house a nice and chilly 72F in the summer. Unfortunately here in San Antonio our December is still 80F outside.
Put a bathroom exhaust fan in your case, run flexible duct outside, to your crawlspace or garage. They have fans that are quiet and efficient.

If you don't have the man skills to do that, you should spend less time gaming. Your wife would agree, probably
 
There are sources other than their registered quarterly & annual reporting, and they all point to about a 20x gap in unit sales.

Based on your post, the GPUs are major money losers and the entire lineup should be killed off. That's not a position I would bet in favor of.

Great. Post them up. I would love to see numbers suggesting they sell 20x more GPUs than CPUs.

I said that they make more from each CPU sold, not that they lose money on GPU sales. The two aren't mutually exclusive.

Last year's reported cost for a 7nm wafer was between $7,000 and $9,000. You get 518 Zen dies per wafer, and 80 Navi 21 dies per wafer. That's a cost of $13.50-$17.37 per Zen 3 die and $87.50-$112.50 per Navi 21 die. Ignoring the much lower cost for the rest of the CPU (substrate and IO die) versus an entire GPU, let's look at profit per wafer in a worst-case CPU (5900X, which uses two chiplets per CPU) vs best-case GPU (6900 XT) situation, just dirty napkin math.

5900X: $549 MSRP → 549 × 259 − (17.37 × 518) ≈ $133,193 per wafer
6900 XT: $999 MSRP → 999 × 80 − (87.50 × 80) = $72,920 per wafer

There is no situation where AMD doesn't prioritize Zen 3 core production; that's the money maker for them. The math skews more towards CPUs in every other combination of SKUs. There's lower risk, higher profit per SKU, and they sell CPUs directly to retailers versus selling a GPU chip to AIBs, meaning that higher profit per SKU comes right back to the company.
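For anyone who wants to check or tweak that napkin math, here's the same calculation as a small script, using the wafer costs, MSRPs, and dies-per-wafer figures from the post above (everything else, IO die, substrate, board, memory, channel margins, is still ignored):

```python
# Napkin math: MSRP revenue minus raw wafer cost, per 7nm wafer.
# Worst-case CPU (5900X, two chiplets each) vs best-case GPU (6900 XT).

ZEN3_DIES_PER_WAFER = 518
NAVI21_DIES_PER_WAFER = 80
CPU_MSRP, GPU_MSRP = 549, 999
CPUS_PER_WAFER = ZEN3_DIES_PER_WAFER // 2   # 259 two-chiplet 5900Xs per wafer

for wafer_cost in (7000, 9000):             # reported 7nm wafer cost range
    cpu_net = CPU_MSRP * CPUS_PER_WAFER - wafer_cost
    gpu_net = GPU_MSRP * NAVI21_DIES_PER_WAFER - wafer_cost
    print(f"${wafer_cost} wafer: 5900X ~${cpu_net:,}, 6900 XT ~${gpu_net:,}")
# -> roughly $133k-$135k per CPU wafer vs $71k-$73k per GPU wafer
```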
 
AMD might be selling a ton of chips to AIBs, but I don't see AMD cards moving. My MC AMD case is packed with all the 6xxx cards. NE and Amazon both got stock of everything. They are not selling because of ridiculous AIB pricing. I really think the AIBs are shooting themselves in the foot with their greed; $900 for a 6700 XT is outrageous.
 
Remember when DX12 was supposed to take care of this? We were supposed to have the ability to plug in multiple different graphics cards and DX12 was going to combine their power. That never really got support from the GPU makers since it was on them to code for it. Hopefully, the on-card stuff will work.
DX12 does, but it is on game developers to program for it and it is a pain in the ass managing all the memory registers and tracking code consumption. It’s not worth it in the consumer space, because multi GPU users make up such a minute fraction of the already niche space that it just isn’t worth the time or money to do. It is used in the professional space in both DX12 and Vulkan environments though.
 
AMD might be selling a ton of chips to AIBs, but I don't see AMD cards moving. My MC AMD case is packed with all the 6xxx cards. NE and Amazon both got stock of everything. They are not selling because of ridiculous AIB pricing. I really think the AIBs are shooting themselves in the foot with their greed; $900 for a 6700 XT is outrageous.

nailed it
 
Yeah, Amazon has stock, but look at the prices. This is supposed to be a $649 MSRP card. Of course people aren't bending over backwards to buy it.

[attached screenshot: Amazon listings well above the $649 MSRP]
 
Yeah, Amazon has stock, but look at the prices. This is supposed to be a $649 MSRP card. Of course people aren't bending over backwards to buy it.

[attached screenshot: Amazon listings well above the $649 MSRP]
My coworker was looking at 80s VWs online the other day. There was an 85 Jetta diesel (non-turbo) that was pretty clean. However, it had 300k on the clock and was priced at $7500. Probably still a better deal than those cards listed above.
 
Put a bathroom exhaust fan in your case, run flexible duct outside, to your crawlspace or garage. They have fans that are quiet and efficient.

If you don't have the man skills to do that, you should spend less time gaming. Your wife would agree, probably
This is dumb if you have adequate air conditioning for the load in your room/house. Plus bathroom fans are typically 100cfm or less, some up to 300 cfm but they aren't so quiet. To solve this myself when my air conditioners crapped out I had the upstairs 3 ton replaced with a 4 ton, and a 1 ton portable dual hose turd as a peaker in the small room that has a server and two gaming rigs in it. It works great, plus now I have a nice even 8 tons of permanently installed air conditioning instead of 7.

I have thought about using in-wall fans (very popular with people that heat with wood in houses not explicitly built to be heated this way) in the winter to blow heat at the adjacent offices.
 
Great. Post them up. I would love to see numbers suggesting they sell 20x more GPUs than CPUs.
HardwareTimes came up with 20x the wafer space for GPUs: https://www.hardwaretimes.com/1-mil...f-amds-7nm-capacity-at-tsmc-3-4-for-big-navi/

Other sources point to 15-25x unit sales. You can google this yourself.

The math skews more towards CPUs in every other combination of SKUs
Your math is just a wafer cost analysis that compares a component with a boxed finished good using the MSRP of finished goods for both. It ignores the huge margin differences between the AIB GPU, console GPU, and retail distribution business models. It also ignores how much more expensive those CPU wafers would be without the 20x GPU volume there to bring prices down.
 
I’d be happy if my next graphics card used less than the 400+ W my 3090 does. If it ended up using more, I’m gonna have to do some thinking.

Sick of how hot my study gets
My PCs are in the basement and I just run display cables upstairs. Also makes the room silent and for cooling I leave the side panel off.

Setups like mine solve a lot of issues with heat and noise… especially for a card talked about in this thread.
 
HardwareTimes came up with 20x the wafer space for GPUs: https://www.hardwaretimes.com/1-mil...f-amds-7nm-capacity-at-tsmc-3-4-for-big-navi/

Other sources point to 15-25x unit sales. You can google this yourself.


Your math is just a wafer cost analysis that compares a component with a boxed finished good using the MSRP of finished goods for both. It ignores the huge margin differences between the AIB GPU, console GPU, and retail distribution business models. It also ignores how much more expensive those CPU wafers would be without the 20x GPU volume there to bring prices down.

That's great. The article you linked claims that:

1 - AMD used 101 million mm² of their 7nm allotment to manufacture Zen 2/3 chiplets, totaling 2 million CPUs.

2 - AMD used 131 million mm² of their 7nm allotment to manufacture Navi 21 dies, totaling 250k GPUs.

That's all complete supposition, excludes servers, and still shows CPUs being produced 10:1 over GPUs. Again, that article shows more wafers allotted to the Series S than to Navi dies; if AMD is grateful for a bulk discount, it's because of console sales.

Second, yes, that's what napkin math usually indicates. The plain wafer cost analysis is the best case for your argument, or are you really trying to claim that the cost to complete a CPU from a chiplet is more than the cost to complete a video card from a GPU die?

Look over the numbers again real closely and tell me how in the world you think that GPU sales are subsidizing CPU sales or are the money maker.
 
Yeah, Amazon has stock, but look at the prices. This is supposed to be a $649 MSRP card. Of course people aren't bending over backwards to buy it.

[attached screenshot: Amazon listings well above the $649 MSRP]
Those are AIB MSRPs. MC has them a little cheaper, but like I said earlier, they sit. Any and all RTX cards don't last the day, other than 3080s; I haven't seen one of those for months. It seems 3090 production has wound down too, since they have become pretty rare. They get tons of everything else each week, though.
 
I understand. We all know AMD is not the strongest brand for GPUs (deserved or not) and the new cards aren't good for ray tracing and don't have DLSS. FSR is great, but also works on Nvidia.

My 6800 XT is pretty nice, I like it, and for standard games performance is pretty good. Ray tracing is a joke though, like 15 fps in Cyberpunk 2077 (even on low settings). And DLSS is a good deal better than FSR.

I used to have a 2080 Ti, and it might have actually been slightly faster than the 6800 XT. But AMD works a lot better on Linux, so that is why I made the switch. If I was on Windows still I would have kept the 2080 Ti.
 
Put a bathroom exhaust fan in your case, run flexible duct outside, to your crawlspace or garage. They have fans that are quiet and efficient.

If you don't have the man skills to do that, you should spend less time gaming. Your wife would agree, probably
Yeah, no. Not only would that look ridiculous, it wouldn't work well, especially not with how my house is set up.
This is dumb if you have adequate air conditioning for the load in your room/house. Plus bathroom fans are typically 100cfm or less, some up to 300 cfm but they aren't so quiet. To solve this myself when my air conditioners crapped out I had the upstairs 3 ton replaced with a 4 ton, and a 1 ton portable dual hose turd as a peaker in the small room that has a server and two gaming rigs in it. It works great, plus now I have a nice even 8 tons of permanently installed air conditioning instead of 7.

I have thought about using in-wall fans (very popular with people that heat with wood in houses not explicitly built to be heated this way) in the winter to blow heat at the adjacent offices.
Yeah, my unit cools the home well, but the office is upstairs and the thermostat is downstairs. I've experimented with portable AC units in the office, but they're loud and an eyesore. The best solution would be two thermostats and two units, but it's not a big enough issue to shell out that much money to redo the HVAC in the house.
 
This is dumb if you have adequate air conditioning for the load in your room/house. Plus bathroom fans are typically 100cfm or less, some up to 300 cfm but they aren't so quiet. To solve this myself when my air conditioners crapped out I had the upstairs 3 ton replaced with a 4 ton, and a 1 ton portable dual hose turd as a peaker in the small room that has a server and two gaming rigs in it. It works great, plus now I have a nice even 8 tons of permanently installed air conditioning instead of 7.

I have thought about using in-wall fans (very popular with people that heat with wood in houses not explicitly built to be heated this way) in the winter to blow heat at the adjacent offices.
How does 4 ton + 1 ton = 8 ton?

If you exhausted all of the hot air coming from your computer outside of your air conditioned area, your air conditioners would not have to work as hard.
Granted, that exhausted air has to be replaced, but if outside air is cooler than your PC air, it's a win.
Yes the fan uses electricity, but it takes less energy to push a gas than it takes to compress it.

You can't fool mother nature!
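Back-of-the-envelope numbers on the exhaust-vs-AC point; this sketch assumes ~500 W of PC heat, a 10 °C exhaust temperature rise, and an air-conditioner COP of about 3, all of which are illustrative guesses rather than measurements:

```python
# How much airflow carries 500 W of PC heat outside, vs. what the AC
# would draw to remove the same heat? (Rough, steady-state numbers.)

AIR_DENSITY = 1.2      # kg/m^3
AIR_CP = 1005          # J/(kg*K), specific heat of air
CFM_PER_M3_S = 2118.9  # 1 m^3/s expressed in cubic feet per minute

heat_w = 500           # assumed PC heat output
delta_t = 10           # assumed exhaust temperature rise in degrees C
ac_cop = 3.0           # assumed air-conditioner coefficient of performance

mass_flow = heat_w / (AIR_CP * delta_t)       # ~0.05 kg/s of air
cfm = mass_flow / AIR_DENSITY * CFM_PER_M3_S  # ~88 CFM

print(f"Exhaust airflow needed: ~{cfm:.0f} CFM (about what a 100 CFM bath fan moves)")
print(f"Removing the same heat via AC: ~{heat_w / ac_cop:.0f} W of compressor input")
```

As long as the outside air really is cooler than the PC exhaust, a small fan's power draw beats making the compressor move that heat.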
 
Again, that article shows more wafers allotted to the Series S than to Navi dies; if AMD is grateful for a bulk discount, it's because of console sales.
The article shows 20x as many wafers going into GPUs than into CPUs. You just refuse to count the console GPUs because then you look very very wrong. That's deep into "50% of the time, it works every time" territory.
The plain wafer cost analysis is the best case for your argument, or are you really trying to claim that the cost to complete a CPU from a chiplet is more than the cost to complete a video card from a GPU die?
Yes, that is correct. It always costs more to do something (complete a CPU) than to not do something (complete a GPU from a chip sold to an AIB).

How much of MSRP ends up going to AMD? When selling chips to AIBs, 100% of the chip price comes back to AMD since it's a direct sale. How about a retail boxed CPU? 50-70%. That's because the CPU is sold to a distributor who then sells to the reseller who then sells to the end user. Accounting for this key difference brings the numbers much closer.

There are other major differences in the business models which, again, change the numbers dramatically from what you had posted. One of them is that nearly 100% of GPU dies are sold before they're made. CPU dies, on the other hand, are not pre-sold and will usually sit on the shelf for 1-2 months (in a hot market, which this is not) after the 1-2 months it took to get them on the shelf in the first place.

It's weird that the product line which uses 95% of their wafers and was released first and is most in need of development compared to the competition is the lower priority line. The last bit might even be the most important one. On the CPU side, AMD is ahead of Intel and can afford to let off the throttle if they have to. On the GPU side, AMD is playing catchup to Nvidia and doesn't have a next-gen console opportunity for their next-gen GPUs. If they're not going to exit the GPU game, then they need to prioritize that business until they reach parity with Nvidia or Intel reaches parity on CPUs.

But hey, maybe you're right about everything here. That would certainly explain why AMD loses money every quarter.

Kinda weird that AMD released their "top priority" last though, right?
 
If you exhausted all of the hot air coming from your computer outside of your air conditioned area, your air conditioners would not have to work as hard.
Granted, that exhausted air has to be replaced, but if outside air is cooler than your PC air, it's a win.
Yes the fan uses electricity, but it takes less energy to push a gas than it takes to compress it.

You can't fool mother nature!
^^ This guy speaks the truth. It's why data centers are set up with hot aisles & cold aisles. The catch is that replicating that at home ends up being either really ugly or very expensive. Difficult to justify for gaming (IMO) and still not easy for a home render farm unless it's all rackmount gear.
 
The article shows 20x as many wafers going into GPUs than into CPUs. You just refuse to count the console GPUs because then you look very very wrong. That's deep into "50% of the time, it works every time" territory.

Yes, that is correct. It always costs more to do something (complete a CPU) than to not do something (complete a GPU from a chip sold to an AIB).

How much of MSRP ends up going to AMD? When selling chips to AIBs, 100% of the chip price comes back to AMD since it's a direct sale. How about a retail boxed CPU? 50-70%. That's because the CPU is sold to a distributor who then sells to the reseller who then sells to the end user. Accounting for this key difference brings the numbers much closer.

There are other major differences in the business models which, again, change the numbers dramatically from what you had posted. One of them is that nearly 100% of GPU dies are sold before they're made. CPU dies, on the other hand, are not pre-sold and will usually sit on the shelf for 1-2 months (in a hot market, which this is not) after the 1-2 months it took to get them on the shelf in the first place.

It's weird that the product line which uses 95% of their wafers and was released first and is most in need of development compared to the competition is the lower priority line. The last bit might even be the most important one. On the CPU side, AMD is ahead of Intel and can afford to let off the throttle if they have to. On the GPU side, AMD is playing catchup to Nvidia and doesn't have a next-gen console opportunity for their next-gen GPUs. If they're not going to exit the GPU game, then they need to prioritize that business until they reach parity with Nvidia or Intel reaches parity on CPUs.

But hey, maybe you're right about everything here. That would certainly explain why AMD loses money every quarter.

Kinda weird that AMD released their "top priority" last though, right?

Because consoles aren't GPUs; they're APUs or SoCs, whichever you prefer. You're in a thread about literal PC GPUs; you can't pop in, toss down some stupid-ass comment, and then change what's being discussed to fit your point.

So, again, consoles aren't GPUs, and you're not in a console thread. As far as what's released first: yes, AMD has a commitment to Sony and Microsoft to design, build, and SUPPLY APUs for their consoles, and has to meet those commitments to avoid penalties. Those releases are timed for the biggest shopping season of the year, Nov-Dec, and require chips to be supplied for assembly months in advance. Yes, they get priority, for a number of reasons.

If you actually read the rest of the thread and were interested in the actual topic: the console APUs being on 7nm is a plus, because when AMD moves their PC parts over to 5nm, the consoles will stay on 7nm and will have the freed-up 7nm capacity to absorb, in addition to what TSMC is adding currently.

As far as profits per GPU vs CPU: if you stick to actual GPUs and what this thread is about, sure, 100% of the chip cost comes back, but for a part that costs 5x as much to make, has to fit within a much larger BOM, and sells inside a product that retails for only 2x as much.

To sell a usable CPU you need one or two chiplets, an IO die, a substrate, and assembly. To make a usable GPU to sell to AIBs you need one Navi die, a substrate, and packaging. The cost of the Zen 3 dies plus an IO die is less than the cost of a single Navi 21 die; AMD ends up with a completed CPU to sell for less than it costs to package a GPU die for sale to an AIB.
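Putting rough numbers on that claim, reusing the per-die costs from the earlier napkin math; the IO die figure is a made-up placeholder (it's built on a cheaper 12nm GloFo process, but its actual cost isn't public):

```python
# Rough check of "two chiplets + IO die < one Navi 21 die", at $9,000 per wafer.
zen3_chiplet_cost = 9000 / 518     # ~$17.37 per chiplet
navi21_die_cost = 9000 / 80        # ~$112.50 per die
io_die_cost_guess = 15.0           # hypothetical 12nm GloFo IO die cost

cpu_silicon = 2 * zen3_chiplet_cost + io_die_cost_guess  # ~$49.75
print(f"5900X silicon ~${cpu_silicon:.2f} vs Navi 21 die ~${navi21_die_cost:.2f}")
```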

Those chips sitting on a shelf are paid for by the retailer or distributor, and we know they're selling because AMD's sales have kept increasing. Having stock doesn't mean things are collecting dust.

Also, Zen 3's launch date was Nov 5, 2020, and the RX 6000 launch date was Nov 18, 2020, so yes, AMD did prioritize which of their products was going to launch first.
 
There are sources other than their registered quarterly & annual reporting, and they all point to about a 20x gap in unit sales.

Considering how low their GPU sales have been, that would be exceptional. What are those sources?

If you are talking about those:
https://www.hardwaretimes.com/amd-u...yzenradeon-chips-in-the-last-6-months-approx/

  • PS5: 137 dies per wafer
  • Xbox Series X: 117 dies per wafer
  • Xbox Series S: 214 dies per wafer
  • Zen 3 chiplet: 520 dies per wafer
  • Renoir/Cezanne: 256/240 (248 avg) dies per wafer
  • Navi 21: 81 dies per wafer

Wafers dedicated to each processor:

  • PS5: 7.8 million dies at 137 per wafer ~56,934 wafers.
  • Xbox Series X: 3 million dies at 117 per wafer ~ 25,641 wafers
  • Series S: 1.5 million dies at 214 per wafer ~ 7,009 wafers
  • Zen 3: 3,365 (R5/R7) + 2,884 (R9)= 6,249 wafers ; (1.75 million R5/R7, 525K R9 SoCs)
  • Zen 2: 3,000-4,000 wafers (approx)
  • Renoir/Cezanne: 500K dies at 248 per wafer ~ 2,016 wafers
  • Navi 21: 500K dies at 81 per wafer ~ 6,172 wafers
Excluding consoles and APUs (which are both CPU and GPU), that's more than 10 CPUs for each GPU, I think. If we count everything, so many of the units made are both (CPU and GPU) that the ratio gets closer to 1:1.
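A quick sketch reproducing the wafer estimates above from the listed die totals and dies-per-wafer figures (the die totals themselves are the article's estimates, not AMD-reported numbers):

```python
# wafers = dies needed / dies per wafer, using the article's estimates above
estimates = {
    "PS5":            (7_800_000, 137),
    "Xbox Series X":  (3_000_000, 117),
    "Xbox Series S":  (1_500_000, 214),
    "Renoir/Cezanne": (500_000, 248),
    "Navi 21":        (500_000, 81),
}

for name, (dies_needed, dies_per_wafer) in estimates.items():
    print(f"{name:14s} ~{dies_needed / dies_per_wafer:,.0f} wafers")
# Zen 3 is listed above as 3,365 + 2,884 = 6,249 wafers, split across
# one-chiplet (R5/R7) and two-chiplet (R9) SKUs.
```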
 
Yeah, Amazon has stock, but look at the prices. This is supposed to be a $649 MSRP card. Of course people aren't bending over backwards to buy it.

[attached screenshot: Amazon listings well above the $649 MSRP]
The worst thing is that these cards aren't even supposed to be $649. How many 256-bit AMD cards in the past were $649? At some point AMD will make a 512-bit version like the R9 390, or an HBM-based version like the Vega cards. The 6800 XT doesn't even use GDDR6X, just regular GDDR6. This is a midrange $250-$300 graphics card that AMD can afford to add another $300 to during a pandemic where that same card sells for $1,600 in retail stores. These cards are literally printing money for some people in the form of crypto, and those people aren't about to give that up for a bunch of nerds who want to play video games. The market needs to respond.
 
No, I think the MSRP is fair. That is the card I have, and it's great. It's a little slower than a 2080 Ti (but close, except for ray tracing), so I think the $600 range is about reasonable.

And AIBs will add on top, so in normal times maybe you would see these cards for like $749 or whatever, which is still okay considering the performance. At close to $2K it's just crazy. It's not worth $2K.
 
The worst thing is that these cards aren't even supposed to be $649. How many 256-bit AMD cards in the past were $649? At some point AMD will make a 512-bit version like the R9 390, or an HBM-based version like the Vega cards. The 6800 XT doesn't even use GDDR6X, just regular GDDR6. This is a midrange $250-$300 graphics card that AMD can afford to add another $300 to during a pandemic where that same card sells for $1,600 in retail stores. These cards are literally printing money for some people in the form of crypto, and those people aren't about to give that up for a bunch of nerds who want to play video games. The market needs to respond.
It sort of has been; by the next GPU launch, Ethereum 2.0 should be in full swing, and that will take out a large chunk of the GPU mining effort.
 