The 6500 XT and 6400: Meet your next GPU, PC gaming peasants.

Maybe this doesn't matter purely in an "I just want to play some games" argument, but does the lack of some modern and possibly future encoding/decoding formats bother anyone?

AV1 seems like a big one for using the GPU after its gaming days are done.

Supported Rendering Formats:
- HDMI™ 4K Support: Yes
- 4K H264 Decode: Yes
- 4K H264 Encode: No
- H265/HEVC Decode: Yes
- H265/HEVC Encode: No
- AV1 Decode: No
 
Maybe this doesn't matter purely in an "I just want to play some games" argument, but does the lack of some modern and possibly future encoding/decoding formats bother anyone?
Almost the complete opposite for me: the cheaper and cooler-running a GPU is, the more important those features tend to become to my interest in it. Intel's focus on that aspect for their iGPUs is a really good decision, for example (and it's the only thing I look at from them: monitor and encoding/decoding support).

If I just want to play games, I do not buy a 6400. If I want a cheap GPU for a media server or an HTPC, maybe I do.
 
The reason I left AMD was very buggy drivers playing Half-Life 2; I got a 6800 GT instead and never looked back. The only reason I got the 6900 XT is because the Nvidia cards were so damn difficult to get. Overall the AMD drivers are better now, but they can still be buggy, and they are loaded with tons of crap I don't need.
I've been on team ATI/AMD since the 4850. Never had problems with drivers, bugs, or glitches that I can remember. Then again, I was always drawn to highly optimized shooters or MMOs. I have a 6900 XT right now, mining to earn back some of the cost. I didn't want this card; I honestly wanted a 6700 XT for around $650-700. It's only drawing 147 watts. When ETH switches to PoS, I plan to play some DCS in VR. If ETH never switches, I'll just game on my old R9 Furys.

Maybe I am getting old, but I see almost no difference gaming between an ATI FirePro V7800 and my new 6900 XT. I am on a 31" 1440p 144 Hz monitor. The FirePro card was able to run my games at 80-95 fps, sometimes at medium settings, but the quality wasn't a big difference to me.

Graphics don't make a game fun; the story and mechanics make a much bigger difference for me. If it were not for my interest in VR, I don't think I'd need to upgrade for a long time (still waiting on a title worth buying, though). Robo Recall worked fine on my laptop's RX 580.
 
The die size of Navi 24 is 107 mm².

Apparently this works out to about $20 per GPU chip, assuming a wafer cost of $10k per 6 nm wafer.
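For what it's worth, here's a minimal back-of-the-envelope sketch of that math (the 300 mm wafer size and the yield treatment are my assumptions, not AMD's), using the usual gross-die approximation:

```python
# Back-of-the-envelope die cost: gross dies per 300 mm wafer via the
# standard approximation, then cost per die at the assumed wafer price.
import math

WAFER_DIAMETER = 300.0   # mm; standard wafer size (assumption)
DIE_AREA = 107.0         # mm^2; Navi 24, per AMD
WAFER_COST = 10_000.0    # USD per 6 nm wafer (the assumed figure above)

# Gross dies = wafer area / die area, minus an edge-loss correction term.
edge_loss = (math.pi * WAFER_DIAMETER) / math.sqrt(2 * DIE_AREA)
gross_dies = (math.pi * (WAFER_DIAMETER / 2) ** 2) / DIE_AREA - edge_loss

print(f"Gross dies per wafer: {gross_dies:.0f}")                     # ~596
print(f"Cost per die at 100% yield: ${WAFER_COST / gross_dies:.2f}")  # ~$16.78
```

At 100% yield that's under $17 per die; factor in realistic yields and you land right around the $20 figure.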


Charlie Demerjian (@CDemerjian) Tweeted:
@no_one180 Got it directly from AMD, "Process, die size, and TDP of the 6500XT?

The AMD Radeon RX 6500 XT graphics card is built using the 6nm process technology and a 107mm2 die."
Sorry about the formatting.
https://twitter.com/CDemerjian/status/1480597930780229633?s=20
 
Not shockingly... https://www.techpowerup.com/290756/amd-radeon-rx-6500-xt-real-world-pricing-closer-to-usd-300

It gets worse and worse for this card:
- x4 interface
- no AV1 decode and no H264/H265 encode
- paltry 64-bit, 4 GB memory setup meant to deter miners, yet prices are still inflated.

If one wants low-level gaming and there is absolutely nothing else out there for a reasonable price, you would be better off trading for a 5600G/5700G to hold you over. Those APUs will likely have better resale value than this turd.
 
Not shockingly... https://www.techpowerup.com/290756/amd-radeon-rx-6500-xt-real-world-pricing-closer-to-usd-300

It gets worse and worse for this card:
- x4 interface
If it's PCIe 4.0, that's equivalent to 8 lanes of 3.0, and plenty for a 1080p card. Conveniently, the PCIe spec it uses is left off the product page, and I haven't seen it mentioned in any news articles...
 
If it's PCIe 4.0, that's equivalent to 8 lanes of 3.0, and plenty for a 1080p card. Conveniently, the PCIe spec it uses is left off the product page, and I haven't seen it mentioned in any news articles...
It would still be 4 lanes of 3.0 for those using PCIe 3.0. That should be equivalent to 8 lanes at 2.0 for older motherboard users.
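For reference, a quick sketch of those equivalences (approximate effective per-lane rates after encoding overhead; my rough figures, not anything from a product page):

```python
# Approximate one-way PCIe bandwidth per lane, in GB/s, after
# 8b/10b (Gen 2) and 128b/130b (Gen 3/4) encoding overhead.
PER_LANE_GBPS = {"2.0": 0.500, "3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Approximate one-way link bandwidth in GB/s for a gen/lane combo."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("4.0", 4), ("3.0", 8), ("3.0", 4), ("2.0", 8)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth(gen, lanes):.1f} GB/s")
# PCIe 4.0 x4: ~7.9 GB/s  (what the card gets on a Gen 4 board)
# PCIe 3.0 x8: ~7.9 GB/s  (the equivalence above)
# PCIe 3.0 x4: ~3.9 GB/s  (what it gets on a Gen 3 board)
# PCIe 2.0 x8: ~4.0 GB/s  (the older-board equivalence)
```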

The much more powerful 2080 Ti lost 9% with that bandwidth. Not sure if this card will lose less because it is much weaker, or more because it has a much smaller memory pool and a fraction of the internal bandwidth.

https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/6.htm
 
It would still be 4 lanes of 3.0 for those using PCIe 3.0. That should be equivalent to 8 lanes at 2.0 for older motherboard users.

The much more powerful 2080 Ti lost 9% with that bandwidth. Not sure if this card will lose less because it is much weaker, or more because it has a much smaller memory pool and a fraction of the internal bandwidth.

https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/6.htm
The 2080 Ti isn't a 1080p gaming card, either. Not now, nor when it was released.

Edit: I should mention that the 1080 Ti, in their previous bandwidth comparison, showed no difference in any lane configuration down to x8, and it was even more of a 1440p+ card.
 
The reason I left AMD was very buggy drivers playing Half-Life 2; I got a 6800 GT instead and never looked back. The only reason I got the 6900 XT is because the Nvidia cards were so damn difficult to get. Overall the AMD drivers are better now, but they can still be buggy, and they are loaded with tons of crap I don't need.

Are you seriously saying we should judge them by the drivers they wrote for a game 20 years ago?
 
Are you seriously saying we should judge them by the drivers they wrote for a game 20 years ago?
Personally, I haven't bought an ATI/AMD graphics card since the shit-show they made of the Windows 2000 drivers for their All-in-Wonder 128.

Hold a grudge? I have no idea what you mean...



Sent from my Ryzen 3700x
 
Are you seriously saying we should judge them by the drivers they wrote for a game 20 years ago?
I've tried them a few times since, and they're always glitchy and buggy. They won't win my money back until they get a larger market share with fewer bugs and more killer-app features like DLSS and strong ray-tracing performance.
 
A whole bunch of confirmation bias in here: driver issues from 20 years ago, "killer" features that existed in fewer than 50 games a year ago.

It's weird that reviewers don't talk about stability issues with AMD drivers. They're running dozens to hundreds of different tests across all the GPUs, games, and resolutions.

If you think you'll find a problem, you're more likely to find one.
 
Are you seriously saying we should judge them by the drivers they wrote for a game 20 years ago?
How about the black-screen issues that were only fixed less than a year ago? No? Okay, maybe the Halo Infinite issues? Not that either? Well then, how about the CS:GO issue that popped up again after the Halo Infinite fix dropped?

I could go on, but you see the point.
 
Some just suck more than others... ;).

Only if they're Chinese or something. AMD and Nvidia are about even when it comes to issues: AMD because they try too much and can't quite deliver, and Nvidia because they don't care. Pick your poison.
 
The lack of video encode is perplexing. PCIe Gen 4 x4 may hurt those budget users with older, PCIe Gen 3-only motherboards, and be even worse for the Bulldozer-era PCIe Gen 2 systems still in use, except that on those older systems the CPU may become the limiting factor before the constrained PCIe bandwidth does. Have to see how the reviews go, I guess. On PCIe Gen 4 motherboards this should not be an issue. This is one card that should actually hit MSRP and below; if not, it's probably best not to buy.

Really, if it solidly beats the 1060 (lol) and is available at or below MSRP, it could be a great seller, an OEM-everywhere card, and climb the Steam hardware survey charts.

This has all the markings of taking a lot of market share, sad to say, due to the current terrible market for video cards. Unless Intel can come out in numbers with a viable solution in Arc, it's another sad year for PC gaming hardware.
 
It would still be 4 lanes of 3.0 for those using PCIe 3.0. That should be equivalent to 8 lanes at 2.0 for older motherboard users.

The much more powerful 2080 Ti lost 9% with that bandwidth. Not sure if this card will lose less because it is much weaker, or more because it has a much smaller memory pool and a fraction of the internal bandwidth.

https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/6.htm
Then I would upgrade my motherboard, just like the world does for every single other component within a computer. The argument you're making here isn't that it doesn't work; it's that some performance is left on the table with older motherboards, which is what we all call the reality of using old hardware. Even Intel's latest budget motherboards halve bandwidth for the current gen, and the vast response to that was, "well, it's a budget board."

So it's perplexing as hell for actual computer builders to fall onto their fainting couches at the realization that holding onto older hardware leads to lower performance.
 
The lack of video encode is perplexing. PCIe Gen 4 x4 may hurt those budget users with older, PCIe Gen 3-only motherboards, and be even worse for the Bulldozer-era PCIe Gen 2 systems still in use, except that on those older systems the CPU may become the limiting factor before the constrained PCIe bandwidth does. Have to see how the reviews go, I guess. On PCIe Gen 4 motherboards this should not be an issue. This is one card that should actually hit MSRP and below; if not, it's probably best not to buy.

Really, if it solidly beats the 1060 (lol) and is available at or below MSRP, it could be a great seller, an OEM-everywhere card, and climb the Steam hardware survey charts.

This has all the markings of taking a lot of market share, sad to say, due to the current terrible market for video cards. Unless Intel can come out in numbers with a viable solution in Arc, it's another sad year for PC gaming hardware.
I don't find the lack of encode puzzling at all. Who is using that in this performance bracket?
 
I don't find the lack of encode puzzling at all. Who is using that in this performance bracket?
True enough when they're new and in a normal market, I imagine (people don't game-stream or work with them, and they're overkill for an HTPC, etc.), but this is the type of performance bracket where, in a couple of years, a card can end up in a secondary computer doing encode/decode duty.
 
Then I would upgrade my motherboard, just like the world does for every single other component within a computer. The argument you're making here isn't that it doesn't work; it's that some performance is left on the table with older motherboards, which is what we all call the reality of using old hardware. Even Intel's latest budget motherboards halve bandwidth for the current gen, and the vast response to that was, "well, it's a budget board."

So it's perplexing as hell for actual computer builders to fall onto their fainting couches at the realization that holding onto older hardware leads to lower performance.

Most systems currently in use are likely PCIe 3. I fail to see why anyone should buy a new platform to use a $200 GPU that'll (maybe) be a bit faster than a 1650S/1060/980/RX 470/RX 570.

All of these mobos are PCIe 3. I could list a bunch more, but these are the most recent ones. The Z490 was released in Q2 2020.

X470, B450, Z370, Z490, X299


*****************************


Hopefully it doesn't look something like

 
Hopefully it doesn't look something like

This card would make an interesting HardOCP review on PCIe 3 boards: what resolution and texture settings, is it playable, and what is the cutoff point in each game where that 4 GB threshold is breached, etc.
 
Most systems currently in use are likely PCIe 3. I fail to see why anyone should buy a new platform to use a $200 GPU that'll (maybe) be a bit faster than a 1650S/1060/980/RX 470/RX 570.

All of these mobos are PCIe 3. I could list a bunch more, but these are the most recent ones. The Z490 was released in Q2 2020.

X470, B450, Z370, Z490, X299


*****************************


Hopefully it doesn't look something like


Under this line of thinking, you should never buy any component that exceeds an older spec.

This logic makes no sense, because in the budget space this happens routinely: video cards not fully utilized because of slow CPUs, SSDs not fully utilized because of PCIe 3.0. But in no case does anyone say a part is bad because of the mismatch. That's what budget purchases usually look like to begin with.
 
Then I would upgrade my motherboard, just like the world does for every single other component within a computer. The argument you're making here isn't that it doesn't work; it's that some performance is left on the table with older motherboards, which is what we all call the reality of using old hardware. Even Intel's latest budget motherboards halve bandwidth for the current gen, and the vast response to that was, "well, it's a budget board."

So it's perplexing as hell for actual computer builders to fall onto their fainting couches at the realization that holding onto older hardware leads to lower performance.
Unnecessarily snarky response, when having a slightly older motherboard never caused any real performance issues before. The fact is that most people looking at this card will still be on PCIe 3.0, and they are not going to spend money to upgrade their motherboard when they are already getting ripped off on budget GPUs.

Hardware Unboxed's results showed a devastating hit on the older 4 GB RDNA cards when using x4, and I doubt the 6500 XT will be any different. Having only 4 GB means they are much more likely to fetch from system RAM over the PCIe lanes, whereas the big Nvidia cards with 10 GB+ of VRAM did not have this issue.
 
Unnecessarily snarky response, when having a slightly older motherboard never caused any real performance issues before. The fact is that most people looking at this card will still be on PCIe 3.0, and they are not going to spend money to upgrade their motherboard when they are already getting ripped off on budget GPUs.

Hardware Unboxed's results showed a devastating hit on the older 4 GB RDNA cards when using x4, and I doubt the 6500 XT will be any different. Having only 4 GB means they are much more likely to fetch from system RAM over the PCIe lanes, whereas the big Nvidia cards with 10 GB+ of VRAM did not have this issue.
My point of view isn't that performance isn't left on the table; it's that, with the way people upgrade on the low end, it's quite common.

So common that I don't see how this is different from any other component.
 
Under this line of thinking, you should never buy any component that exceeds an older spec.

This logic makes no sense, because in the budget space this happens routinely: video cards not fully utilized because of slow CPUs, SSDs not fully utilized because of PCIe 3.0. But in no case does anyone say a part is bad because of the mismatch. That's what budget purchases usually look like to begin with.

Slower CPUs have to be REALLY old to actually hold back budget GPUs at PLAYABLE settings, and having an SSD run at PCIe 3.0 instead of 4.0 does not noticeably cripple a system the way these x4 GPUs do, so those are terrible examples. Would you be happy if your system ran like crap unless you updated your case and PSU every time?

AMD crippled this card and made it unattractive to budget users for no reason, and I really don't see any way of defending it.
 
Slower CPUs have to be REALLY old to actually hold back budget GPUs at PLAYABLE settings, and having an SSD run at PCIe 3.0 instead of 4.0 does not noticeably cripple a system the way these x4 GPUs do, so those are terrible examples. Would you be happy if your system ran like crap unless you updated your case and PSU every time?

AMD crippled this card and made it unattractive to budget users for no reason, and I really don't see any way of defending it.
There's a 15% difference, often more, just from going from the Ryzen 3000 to the 5000 series.
 
An AMD RX 550
If things go like they did in the video with the 5500 XT 4 GB, I would not consider the 6500 XT a viable upgrade at $200 or more. The 6500 XT seems more like a $120 GPU. From what I know, it will likely perform worse than the 5500 XT in PCIe 3.0 systems. We will need to wait for the reviews to know for sure. I will find it quite sad if the RX 570 beats it.
 
Some of these results...
[Benchmark result screenshots]

A lot of these games do fine with just 4 GB of VRAM so long as they have sufficient bandwidth to the motherboard to borrow from system memory. Take that away, and the 4 GB AMD cards take a devastating hit in many cases.
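To put rough numbers on that (a minimal sketch with assumed bandwidth figures, not HUB's data): once a 4 GB card starts paging assets over the bus, what matters is how much it can pull across within a single frame time.

```python
# Illustrative: MB of spillover traffic the bus can move per frame at
# 60 fps, given an approximate one-way link bandwidth in GB/s.
FRAME_TIME_S = 1 / 60  # seconds per frame at 60 fps

def mb_per_frame(link_gbps: float) -> float:
    """MB the link can transfer within one 60 fps frame budget."""
    return link_gbps * 1000 * FRAME_TIME_S

for label, gbps in [("PCIe 4.0 x4", 7.9), ("PCIe 3.0 x4", 3.9),
                    ("PCIe 3.0 x16", 15.8)]:
    print(f"{label}: ~{mb_per_frame(gbps):.0f} MB per frame")
# PCIe 4.0 x4:  ~132 MB/frame
# PCIe 3.0 x4:  ~65 MB/frame
# PCIe 3.0 x16: ~263 MB/frame
```

Halve the link and you halve what can be streamed in before the frame runs late, which lines up with the cliff those 4 GB cards show at x4.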
 
Hopefully it doesn't look something like


I think HWUB could have done better here.
They could have used a 6600 XT and limited the CU count and VRAM at the driver level. That would still not have limited the VRAM bandwidth and Infinity Cache, but it would have used the RDNA2 architecture.
 
Well, if miners don't want 'em, scalpers will gladly take whatever they can get by the pallet, then sell 'em off at 3x-4x MSRP. Sad that we can negate one but get hung up on the other sucking up supply, lol.
 
I believe those are system-OEM only. That said, it's probably the way to go, as it is much harder to scalp an entire system.
If you are ready/don't mind buying a complete system, you can probably get the GPU you want:
https://www.hp.com/us-en/shop/slp/omen-gaming/desktops

It is only standalone GPUs that are rare and hard to buy, from my understanding.

A 10700K with an RTX 3090 is currently $2,900 on hp.com; a 5800X with a 3070 is $2,300.
 