Amazon’s New World game is bricking GeForce RTX 3090 graphics cards

Gamers with access to Amazon’s New World are reporting that their cards are dying after just 15 to 30 minutes of playing the game, Windows Central reports.

The issue appears to mainly affect GeForce RTX 3090 graphics cards, which are reportedly overheating and seeing power spikes. The game has an uncapped framerate in the main menus, which is usually associated with buzzing capacitors. Most users, however, have reported that EVGA RTX 3090 cards specifically are the most affected brand. A number of the RTX 3090 cards have been bricked in the process.

https://videocardz.com/newz/amazons-new-world-game-is-bricking-geforce-rtx-3090-graphics-cards
 
I limited FPS to 60 in-game… not like it matters because even my 6900 XT can’t seem to push above 45 after changing to low settings…
 
This prompted me to spend an additional $30 to upgrade my FTW3's warranty to 5 years... never kept a card that long, but then again never bought a $2k vid card either. Mine is a newer batch with no red lips and improved power components, but I didn't want to be stuck in a bad place in case the 4xxx/5xxx launch is a fiasco too and I do find myself still using the card in 2025 (that VRAM amount ensures the 3090 will stay in the fight for a long time, as long as the rasterization/shading/RT perf can keep up).
 
This was addressed yesterday and they applied a patch to force all menus to be capped, even if your frame rate is not.
 
Jayz said he is receiving reports that some 6900XTs are experiencing failures as well.
 
It seemed like mostly EVGA 3090 FTW3 models were being reported, which IMO points to a weakness in the card design that was just waiting for the right conditions to fail.

Either way, it's old news now. And it really didn't have anything to do with the game itself. There are many games that are not menu-FPS-capped that work just fine. This caught flak because it was in beta.
 
Looks like a major flaw in the thermal design of the cards if they can't handle being used.
Granted, reports were saying the cards in question were hitting 9k FPS. But still, when you have one very specific card failing multiple times? Everyone was praising EVGA for immediately accepting RMAs. Umm, that's not "good" customer service; that's what is expected when you made a design error and don't want it to blow up in your face.
I like EVGA. Bought from them many times, but I put this on them, not on any software.
 
My first EVGA 3090 died a few months ago and a quick search through their forums shows the same issue happening for a lot of people...usually attributed to Halo 2 of all things. Mine went kaput playing GTA5. There are some people on RMA #3 or #4. I've been fortunate that my replacement has been fine, but I'm still paranoid as hell about it. I'm not about to play this game as a result.
 
The game mines cryptocurrency when sitting at the menu screens.

 
If you're playing a new game and it's killing your graphics card then it isn't the game's fault. If the card is overheating then that means it has a shit cooler design. I doubt it's an RTX 3090-specific issue but rather an issue with cooler designs by card manufacturers. I'd like to know which of the cards that died are using a reference design and which aren't. From my experience the reference design is always better.
 
Sounds like the 3090s were a real stable design.
Unless someone at Amazon has a real case of GPU envy.... 3090 detected, engage overvolt. lol

I don't get how running the cards at 100% is causing failures though, unless this design is really just complete shit. Still, it would seem prudent for Nvidia, AMD and Intel to just add hard FPS caps into their drivers. Cap them at 300 FPS or something.... just add an option to disable it in the menus next to the overvolt-type options, with warnings that some games without caps can cause serious overheating issues. I mean, why do we need FPS higher than any monitor on the market can refresh anyway?
 
If you're playing a new game and it's killing your graphics card then it isn't the game's fault. If the card is overheating then that means it has a shit cooler design. I doubt it's an RTX 3090-specific issue but rather an issue with cooler designs by card manufacturers. I'd like to know which of the cards that died are using a reference design and which aren't. From my experience the reference design is always better.
Wasn't it the case with the 3080/90s that for the initial offering of cards it was the reverse? The reference was inadequate... or was it that the reference spec was buffed a few weeks before launch? I can't remember now... but I seem to remember this one being Nvidia's fault, as some of that first batch of cards had issues out of the gate. Seems to me some of them may have been just skating by on the edge of stability anyway... and now all it takes is a menu with an uncapped FPS to expose the bad voltage delivery on NV's reference (or at least what some MFGs assumed was reference).
 
Wasn't it the case with the 3080/90s that for the initial offering of cards it was the reverse? The reference was inadequate... or was it that the reference spec was buffed a few weeks before launch? I can't remember now... but I seem to remember this one being Nvidia's fault, as some of that first batch of cards had issues out of the gate. Seems to me some of them may have been just skating by on the edge of stability anyway... and now all it takes is a menu with an uncapped FPS to expose the bad voltage delivery on NV's reference (or at least what some MFGs assumed was reference).
I remember issues with RTX 3080s crashing due to board partners putting in shit-quality capacitors, but that was never 100% confirmed. Maybe this is that issue coming back to haunt Nvidia users? This is why I'd like to know whether or not it's reference cards being affected by this. This is similar to the issue with the GTX 970, where you had 4GB of VRAM but realistically only 3.5GB was usable. How much of the super-expensive RTX 3090 are you able to use before blowing it up? I smell a class action lawsuit coming.

 
I remember issues with RTX 3080s crashing due to board partners putting in shit-quality capacitors, but that was never 100% confirmed. Maybe this is that issue coming back to haunt Nvidia users? This is why I'd like to know whether or not it's reference cards being affected by this. This is similar to the issue with the GTX 970, where you had 4GB of VRAM but realistically only 3.5GB was usable. How much of the super-expensive RTX 3090 are you able to use before blowing it up? I smell a class action lawsuit coming.


Reference cards have not been the widely reported cards. In fact, other than a Reddit post, I've seen no mention of reference cards. I don't think it has anything to do with the reference design or HSF. Ironically, it's a glaring vulnerability in a card that retails for hundreds more than the reference design.
 
The capacitor thing was nothing more than complete nonsense. The guy (Igor) who came up with the idea didn't have a GPU to put on a scope to see whether those were the issue or not. More solid evidence from the OC'ing potential of these cards shows that manufacturers pushed them to the limits to squeeze as much performance out of last gen as possible. They later addressed things with BIOS updates that limited the cards.

There are a lot of things that can fail on a GPU before the caps on the back of your card's core do, such as the VRMs, which take far more load.
 
Sounds like Nvidia needs a Radeon Chill feature. Perhaps AMD could be kind and open-source that for them as well. ;) /jk

Really though, my default setting is Radeon Chill with the max set to my refresh rate and the min 8 FPS under that max. It means in menus and other light loads my card's cooler doesn't even turn on. For all the shit AMD takes over drivers... I love them. Chill is my favorite feature on any GPU driver ever. In less demanding games it keeps everything buttery smooth, and most of the time the fans, if they even turn on, never leave their lowest setting. It would seem to me that with a 3090 there shouldn't be much need to crank everything full open almost ever.
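
If you've never used it, the rough idea behind Chill is just a frame cap that follows your input: sit idle and it drops toward the min FPS, start moving and it ramps back up to the max. Something like this in spirit (a toy Python sketch of the concept, not AMD's actual driver logic; the numbers mirror my settings above):

def chill_style_cap(input_active, refresh_hz=144):
    # Toy version of an input-adaptive frame cap: the max cap is the
    # refresh rate, the min cap sits 8 FPS under it (my setup above).
    fps_max = refresh_hz
    fps_min = refresh_hz - 8
    return fps_max if input_active else fps_min

for active in (False, True):
    cap = chill_style_cap(active)
    state = "active" if active else "idle"
    print(f"input {state}: cap {cap} FPS ({1000 / cap:.1f} ms frame budget)")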
 
https://forums.newworld.com/t/known-issue-nvidia-rtx-3090-series-100-gpu-usage/126068/28
It's interesting that most users seem angered at Amazon. I think I would be more angry at Nvidia/AMD/EVGA/Gigabyte/whoever that I don't even have enough control of my hardware to keep it from bricking under some arbitrary software load. Of course poorly designed software is nothing to be happy about either, but my card BRICKING? I would place the blame for that on the hardware.

edit: on the other hand, that is the New World thread, so I guess it would have more complaints about Amazon. Maybe I should find the EVGA thread.
 
https://forums.newworld.com/t/known-issue-nvidia-rtx-3090-series-100-gpu-usage/126068/28
It's interesting that most users seem angered at Amazon. I think I would be more angry at Nvidia/AMD/EVGA/Gigabyte/whoever that I don't even have enough control of my hardware to keep it from bricking under some arbitrary software load. Of course poorly designed software is nothing to be happy about either, but my card BRICKING? I would place the blame for that on the hardware.

edit: on the other hand, that is the New World thread, so I guess it would have more complaints about Amazon. Maybe I should find the EVGA thread.
I would agree but it is Amazon's game that is doing it. If it was happening with other games then I could understand. There are reports coming in that it is happening across generations of AMD and Nvidia cards. I don't know how a game could do this but it is doing something and Amazon is at fault.
 
It's not Amazon's fault; there is a firmware/engineering defect in a specific 3090 card doing this. If it wasn't Amazon, it could have been Total Annihilation from Cavedog Entertainment or any game that runs uncapped with nothing going on. This is on EVGA for putting out a faulty product.

That being said, this is why I run a framecap in NVCP by default. Mostly to save power though.
 
The 3090 was a last-minute add to the 3000 lineup. They were concerned that Big Navi was going to tie the 3080 and Nvidia would lose the "undisputed fastest card" title.

It's possible that something slipped through the cracks during the design phase.

If it really was some crazy programming on Amazon's part, you would think the cheaper, lower-end cards would be the first to blow.
 
The 3090 was a last-minute add to the 3000 lineup. They were concerned that Big Navi was going to tie the 3080 and Nvidia would lose the "undisputed fastest card" title.

It's possible that something slipped through the cracks during the design phase.

If it really was some crazy programming on Amazon's part, you would think the cheaper, lower-end cards would be the first to blow.
This isn't happening on the reference design, from what can be gathered. It's an issue on other boards, like EVGA's, that have three 8-pin connectors versus the reference design's two (technically only one connector, the 12-pin, on the FE board). These designs seem to use a different power-regulation chip as well, and who knows what else is different.
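
Just for scale (standard spec ratings, not measurements from any of the failed cards): each 8-pin PCIe connector is rated for 150 W and the slot for 75 W, so the rated board-power headroom works out roughly like this:

# Rated power budget per PCIe spec: 150 W per 8-pin connector, 75 W from the slot.
for name, eight_pins in (("reference (2x 8-pin)", 2), ("FTW3-style (3x 8-pin)", 3)):
    budget = eight_pins * 150 + 75
    print(f"{name}: {budget} W rated headroom")
# reference (2x 8-pin): 375 W, FTW3-style (3x 8-pin): 525 W.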
 
The 3090 was a last-minute add to the 3000 lineup. They were concerned that Big Navi was going to tie the 3080 and Nvidia would lose the "undisputed fastest card" title.

It's possible that something slipped through the cracks during the design phase.

If it really was some crazy programming on Amazon's part, you would think the cheaper, lower-end cards would be the first to blow.
Higher-end cards tend to be putting a lot more voltage into their chips and RAM to run at higher frequencies.

It would be like taking two CPUs... clocking one 200 MHz lower than its normal clock and the other 200 MHz over. Which one is going to heat up the most? If there is a voltage fault in the mobo they are slotted into... the one drawing more power is going to find the flaw first.

This seems to be something to do with voltage regulation or some such thing that is causing some 3090s to fry parts.
 
I would agree but it is Amazon's game that is doing it. If it was happening with other games then I could understand. There are reports coming in that it is happening across generations of AMD and Nvidia cards. I don't know how a game could do this but it is doing something and Amazon is at fault.
It's still too early to say since we don't know what the issue is yet, so I won't argue it too much more. But at the end of the day I expect my hardware to run arbitrary software loads. If Amazon happened to create a program that showcases an issue, I would treat them as the discoverers of a GPU problem (again, let's wait for more info). I guess my opinion on this is not new.

see:
1) is a 24-hour Prime95 stress test necessary?
2) power viruses (is it the hardware's responsibility to keep from breaking?)
3) is FurMark realistic?
 
If it was happening with other games then I could understand.
ummm:
My first EVGA 3090 died a few months ago and a quick search through their forums shows the same issue happening for a lot of people...usually attributed to Halo 2 of all things. Mine went kaput playing GTA5.


sounds to me like these highest-end cards, or something on them, aren't handling their sustained OC at 100% load with unlocked FPS. Lowering the power a bit or limiting the FPS via the driver or RivaTuner will probably prevent it.
 
3) is FurMark realistic?

This was answered over 10 years ago; the answer is no. It oversaturates the VRM and cooks it in certain cases. This is why I stopped using it, like... a decade ago.

Just limited my frames to 240 :D

I don't think it should take this to know that you should limit your FPS in the first place. After all, frame sync does this as well. Getting more FPS than your panel can display does nothing more than put excessive load on your hardware.
 
Sounds like the 3090s were a real stable design.
Unless someone at Amazon has a real case of GPU envy.... 3090 detected, engage overvolt. lol

I don't get how running the cards at 100% is causing failures though, unless this design is really just complete shit. Still, it would seem prudent for Nvidia, AMD and Intel to just add hard FPS caps into their drivers. Cap them at 300 FPS or something.... just add an option to disable it in the menus next to the overvolt-type options, with warnings that some games without caps can cause serious overheating issues. I mean, why do we need FPS higher than any monitor on the market can refresh anyway?
LG just announced a 480 Hz screen, and there are some FPS players who like their framerate as high as possible even if it's above their refresh rate.

It would be an okay solution as long as it could be disabled and adjusted like you suggested, but it would still just be covering for a hardware flaw of some sort, which would encourage sloppy/insufficient designs.
 
LG just announced a 480 Hz screen, and there are some FPS players who like their framerate as high as possible even if it's above their refresh rate.

It would be an okay solution as long as it could be disabled and adjusted like you suggested, but it would still just be covering for a hardware flaw of some sort, which would encourage sloppy/insufficient designs.
This was happening at an unrealistic FPS. We're talking thousands of FPS. What happened is that as soon as the game went to the uncapped-framerate menu, the FPS would shoot up into the thousands instantly; that put a sudden, very high load on the card, and it's frying them. As long as you cap your framerate to something within the realm of reason, it's not going to be an issue. But really, this seems to be a design flaw of certain AIB cards.
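
For anyone wondering what "capping the menu" actually looks like, here's the general idea (a minimal Python sketch of a frame limiter, not Amazon's actual patch): measure how long the frame took to render, then sleep off the rest of the frame budget, so a nearly empty menu scene can't spin at thousands of FPS and slam the card to 100% load.

import time

def render_menu_frame():
    # Stand-in for the real draw call; a menu scene is so cheap that,
    # without a cap, this loop would run thousands of times per second.
    pass

def menu_loop(fps_cap=60, run_seconds=2):
    frame_budget = 1.0 / fps_cap
    frames = 0
    end = time.perf_counter() + run_seconds
    while time.perf_counter() < end:
        start = time.perf_counter()
        render_menu_frame()
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            # Frame finished early: sleep away the rest of the budget.
            time.sleep(frame_budget - elapsed)
        frames += 1
    print(f"~{frames / run_seconds:.0f} FPS with a {fps_cap} FPS cap")

menu_loop()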
 
there are some FPS players who like their framerate as high as possible even if it's above their refresh rate.

Yeah, Counter-Strike players who believe that it will lower their frametimes so much that it'll give them a significant advantage. These people will believe in any kind of placebo. Did they actually do tests with professional-grade hardware to see how much of an advantage it gives them?
 
It's not Amazon's fault; there is a firmware/engineering defect in a specific 3090 card doing this. If it wasn't Amazon, it could have been Total Annihilation from Cavedog Entertainment or any game that runs uncapped with nothing going on. This is on EVGA for putting out a faulty product.

That being said, this is why I run a framecap in NVCP by default. Mostly to save power though.

This isn't affecting just one version of the 3090. It's affecting the 3080 Ti, the 3090, and, from what I just read in this thread, it seems to be affecting 6900 XTs from AMD as well.

This "problem" initially started in the Alpha release of the game, was reported, and left in even after Beta. That makes amazon part of the problem. The greatest part of this whole fiasco is that Amazon has already sent out (totally unrelated thi this issue) thousands of emails to users telling them that if they have a problem with Amazon services/products they can no longer use arbitration to resolve it, they need to use the courts.

So when those affected by this launch a class action, like so many love to do, it cannot be dismissed with the old argument, "But but but...They agreed to use arbitration", lol.
 
Yeah, Counter-Strike players who believe that it will lower their frametimes so much that it'll give them a significant advantage. These people will believe in any kind of placebo. Did they actually do tests with professional-grade hardware to see how much of an advantage it gives them?
Didn't really want to get into that debate because it's not something I care about, but some people do think it helps, and if they want to run higher framerates than their monitor supports they should be able to.
 
Didn't really want to get into that debate because it's not something I care about, but some people do think it helps, and if they want to run higher framerates than their monitor supports they should be able to.

It does help, but to what extent is very questionable. Theoretically, a lower frametime should offer a better experience. But it makes such a small difference that it falls right into placebo range. I've never seen actual tests beyond some silly YouTube videos from CS regarding this - and there's probably a reason for that.
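
For some rough numbers on why the gain shrinks so fast (just the 1000 / FPS frame-time arithmetic, nothing fancy):

# Frame time in milliseconds at a few frame rates: 1000 / FPS.
for fps in (60, 144, 240, 480, 1000):
    print(f"{fps:>4} FPS -> {1000 / fps:.2f} ms per frame")
# 60 -> 16.67, 144 -> 6.94, 240 -> 4.17, 480 -> 2.08, 1000 -> 1.00.
# Going from 240 FPS to 1000 FPS only shaves ~3 ms off each frame,
# which is why any advantage past the refresh rate is so hard to measure.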
 
LMAO at the people that want a class action against Amazon for exposing their shit GPU.

It's like taking your 1990 Chevy Astro to a racetrack and blaming the track when your rusted axle breaks in half.
A more apt comparison would be the track installing software so that your perfectly fine car's gas pedal is stuck to the floor the entire time and your engine redlines and blows. Comparing a 3090 to bringing a POS car to a racetrack is pretty poor anyway.
 
LMAO at the people that want a class action against Amazon for exposing their shit GPU.

It's like taking your 1990 Chevy Astro to a racetrack and blaming the track when your rusted axle breaks in half.

It's known that putting excessive, unrealistic loads on a GPU can have diminishing returns. My card, for example, makes a low-pitched whine when I get a ton of FPS.

It's not like having the rusted axle of an old van break on a track. But to put your point more properly, even newer vehicles can break down on the track. That's a rough, different kind of usage which pushes the vehicle beyond one's aggressive street driving.
 