Power Outage at Samsung Destroys 3.5% of Global NAND Flash Output For March

rgMekanic

[H]ard|News
Joined
May 13, 2013
Messages
6,943
AnandTech is reporting a power outage at a Samsung fab in Pyeongtaek, South Korea. The half-hour outage reportedly damaged 50,000 to 60,000 wafers of V-NAND memory, which represents 11% of Samsung's monthly output and 3.5% of global NAND output.

Ugh, as if the enthusiast PC market needed yet another slap across its face right now. We will have to see what happens in the market, or we may need to add NAND storage to the ever-growing list of things running at inflated prices. I wonder if the power outage was caused by miners using up all the electricity? Big thank you to WhoBeDaPlaya for the story.

Samsung itself has already produced the volumes of its latest Galaxy S9/S9+ smartphones it needed to support channel sales in the coming months, so it is not going to require massive amounts of NAND memory in the coming weeks. Meanwhile, other major consumers of NAND will only start to build up memory inventory later this year, when they prep for product launches in August or September.
 
I just checked: average semiconductor plants use anywhere from 50-100 MW. Given this was one of the largest in the world, it's safe to say 100 MW (roughly 16k US homes, at their average power use).

Probably outside the realm of realistic "UPS," and more into "your own power station."
 
I just checked: average semiconductor plants use anywhere from 50-100 MW. Given this was one of the largest in the world, it's safe to say 100 MW (roughly 16k US homes, at their average power use).

Probably outside the realm of realistic "UPS," and more into "your own power station."

Yep.

Hospitals have their own power plants.
Large industrial complexes often do too.

On top of the fault tolerance, it would probably be cheaper for them to make their own power than to buy it.

 
How the fuck do you not have a backup power system for an operation like that? That must have cost Samsung tens of millions in lost product for one 30-minute power outage. The stupid, it hurts.

Probably because it can get real expensive at that scale; you basically have a power plant at that point. Normal commercial buildings may have one or two big generators for backup, but that is only enough to run emergency lighting, servers, and other important equipment, not the whole building. Something like a hospital or a Vegas casino will have full-premises backup, but that is massive, taking up a building of space in its own right. As big as that is, though, it wouldn't work on this scale, because that is only enough power to cover normal retail/commercial use. Something like a fab needs much more power output, and thus an even larger generation station.

Now, I'm not saying that is impossible, but it is costly to build and to maintain. It won't be a "set it and forget it" thing; you'll have full-time people minding it. So depending on the cost and how reliable your grid is, maybe you decide it isn't worth it. It is all a risk calculation: how often does the grid in your area go down, how much damage does that generally cause to your operations, and how does that compare to what backup power costs to build and maintain?

Remember that in manufacturing, and indeed in all business, loss WILL happen. Shit will go wrong. Risk is a simple part of the equation: you evaluate it and decide whether you want to mitigate it (build backup power), transfer it (buy insurance), or just accept it. Accepting it is something you often have to do, as the other options are too expensive or just plain impossible in some cases.

That aside, who says they didn't have backup power? Maybe they did, and it failed too. That happens. For real important shit you have one (or more) backups to your backup. But again: big size, big cost. How feasible would it be to have multiple backup power stations?
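Just to make that risk calculation concrete, here's a rough sketch in Python. Every figure in it is a made-up assumption for illustration, not anything from Samsung's books:

```python
# Back-of-envelope risk calculation: accept the outage risk, or build backup power?
# Every figure below is a made-up assumption for illustration only.

outages_per_year = 0.05           # assume one damaging grid outage every ~20 years
loss_per_outage = 100_000_000     # assume ~$100M of scrapped wafers per event

backup_capex = 150_000_000        # assume ~$150M to build an on-site backup plant
backup_lifetime_years = 20        # amortization period
backup_opex_per_year = 5_000_000  # fuel, staffing, testing, maintenance

expected_loss_per_year = outages_per_year * loss_per_outage
backup_cost_per_year = backup_capex / backup_lifetime_years + backup_opex_per_year

print(f"Expected outage loss: ${expected_loss_per_year:,.0f}/yr")
print(f"Backup power cost:    ${backup_cost_per_year:,.0f}/yr")
print("Accept the risk" if expected_loss_per_year < backup_cost_per_year else "Build the backup")
```

With those invented numbers the math says accept the risk, which is presumably not far from the call actually made here.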
 
Photo of the perpetrator...

 
A UPS is only set up to carry the load until the generators start. My company makes UPSes that can handle that load. Like it was said above: were the batteries in working condition (UPS batteries only last 3-5 years)? Were their transfer switch, UPS, and generators tested on an annual basis? Lots of questions on this one.

Just like network security, their backup battery plan/system is always at the bottom of the list of things to spend money on. I bet it is at the top of the list now.
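For a sense of what "carry the load until the generators start" means at fab scale, here's a quick sketch. The ~100 MW load and the 60-600 second generator start/stabilize window are assumptions borrowed from the estimates elsewhere in this thread:

```python
# Stored energy a UPS bank needs just to bridge the gap until generators are stable.
# Assumptions: ~100 MW fab load, 60-600 s generator start/stabilize time.

fab_load_mw = 100.0

for bridge_seconds in (60, 300, 600):
    bridge_mwh = fab_load_mw * bridge_seconds / 3600  # MW x hours = MWh
    print(f"{bridge_seconds:>3} s bridge -> {bridge_mwh:5.1f} MWh ({bridge_mwh * 1000:,.0f} kWh)")
```

Even the short end of that range is a couple of megawatt-hours delivered at 100 MW, which is utility-scale battery territory rather than a closet full of lead-acid cells.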
 
Maybe they noticed how long it took for HD prices to return to the levels that existed before the great Thailand flood disrupted supply chains. Long after HD production returned to normal, and even after folks started buying SSDs instead of spinning HDs, prices stayed up. Add in the inflated price levels from the GPU "shortage" and you have an incentive to create, or allow to happen, a NAND supply shortage. Prices go up, folks get used to those price levels, profit! Just a wild-eyed tinfoil-hat hypothesis.
 
I just checked: average semiconductor plants use anywhere from 50-100 MW. Given this was one of the largest in the world, it's safe to say 100 MW (roughly 16k US homes, at their average power use).

Probably outside the realm of realistic "UPS," and more into "your own power station."

Yep, all the fabs I've ever been to have their own substation with multiple independent, geographically separate power lines coming in. You might be able to UPS some things, but that much power usage is pretty hard to cover. Gensets alone would be ~$100 million.

It is a lot easier to do in a data center, where you can have the UPS nicely distributed at the rack.
 
A UPS is only set up to carry the load until the generators start. My company makes UPSes that can handle that load. Like it was said above: were the batteries in working condition (UPS batteries only last 3-5 years)? Were their transfer switch, UPS, and generators tested on an annual basis? Lots of questions on this one.

Just like network security, their backup battery plan/system is always at the bottom of the list of things to spend money on. I bet it is at the top of the list now.

Yeah, but at their scale you are basically talking about a small-scale power plant. Most fabs don't have UPS backup beyond brownout/instantaneous-hiccup coverage. That fab would probably require ~4 LM2500+ gensets to supply 30 minutes of backup power. With standard super-high-output diesels you would need between 30 and 50 gensets; going to the top-end Cat gensets, you would need 14-18. And all of those would require between 60 and 600 seconds of battery standby to cover the transition. Basically, you are looking at a $100+ million cost minimum. Considering that fabs always have multiple-path Tier 1 power contracts (aka the last to be dropped outside of designated emergency Tier 0) and historically unplanned power interruptions are basically non-existent for them, it works out to a pretty costly luxury.
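Roughly how those unit counts fall out, assuming ~100 MW of fab load and some ballpark per-unit ratings (the ratings below are guesses for illustration, not datasheet numbers):

```python
import math

# Rough genset counts to carry an assumed ~100 MW fab load.
# Per-unit ratings are ballpark guesses for illustration, not datasheet values.
fab_load_mw = 100.0

genset_options = {
    "LM2500+ gas turbine":         30.0,  # assume ~30 MW per unit
    "standard high-output diesel":  2.5,  # assume ~2-3 MW per unit
    "top-end Cat-class diesel":     6.0,  # assume ~5-7 MW per unit
}

for name, rating_mw in genset_options.items():
    units = math.ceil(fab_load_mw / rating_mw)
    print(f"{name:<30} ~{rating_mw:4.1f} MW each -> {units:2d} units")
```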

They maybe lost $100 million in wafers at worst. They still come out ahead.
 
I can guarantee the outage was caused by K-pop.




That's right - 0.3% of the world's annual production of V-NAND destroyed because some fattie on the night shift wanted Korean microwave popcorn. That's what is usually happening in my house when the circuit breaker goes.

I bet the poor bastard is in the corner of his boss's office right now, with his hands in the air holding the offending microwave over his head. (You'd have to understand Korean punishments to get that one, though.) His boss is probably ordering a mandated diet of cabbage before reporting to his boss, where he will explain everything and then be executed under a portrait of Lee Byung-chul.


P.S. But hey, that's life at the top of the V-NAND electronics business! Everybody sing! "I'm so ronery... So very ronery... So ronery and sadry arooone...."

P.P.S. I know that the above is shallowly racist and politically insensitive, but... it always makes me laugh, I can't help it. Every time someone screws up and everyone gets quietly serious, my friend will start singing that under his breath, and I can't help myself, I start giggling.
 
It does not appear to have affected pricing at this time; spot prices have been trending down since the event. NAND was expected to be in short supply in the second half of this year, however, so this might move that timeline up slightly.
 
Samsung used to make solar panels. Still, get some more backup power. With the amount of money they bring in, it's totally inexcusable that the world has to be impacted by something so small and isolated.
 
THIS. I am inclined to believe these mishaps are not accidental, given the historic trends.
Because losing 3.5% of the global NAND output for one month is going to be YUGE. (Don't let the other 96.5% find out, or they will also want a slice of that pie.)

 
Because losing 3.5% of the global NAND output for one month is going to be YUGE. (Don't let the other 96.5% find out, or they will also want a slice of that pie.)


11% of the monthly output for ONE company the size of Samsung is devastating.

You expect them to just take the loss on this when investors are watching their portfolios with magnifying glasses?

Take a wild guess what they must do to keep their bottom line in check...
 
A bird farts in the wind in Africa and somehow RAM and NAND production is affected.
This. It might go up; who knows. The price can change between Samsung and the shelf, and anybody in between can raise it a bit.
 
The ideal way to set this up is:

Have multiple power generation plants (city-wide) provide power to the building via different paths (as mentioned above)
Have multiple transfer switches to switch between these generation plants in case one fails (someone could accidentally dig up the underground feed from Generation Plant B just outside the building)
Give each power feed its own UPS and generators
Divide the plant up among the power feeds

Most big data centers are set up this way, so they never lose power unless there is a region-wide disruption like a natural disaster. But usually you know that in advance and can shut down your servers/machines in time.

We take this to the rack: each rack is fed from different power feeds, and each rack has its own UPS/transfer switch. The UPS is only used until the generators can kick in, level off, and become a stable power source.
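A rough sketch of why the independent feeds are the key part, assuming feed failures really are independent (the per-feed failure probability below is invented for illustration):

```python
# If feed failures are truly independent, the chance of losing every feed at once
# is the product of the individual failure probabilities.
# The per-feed unavailability below is an invented figure for illustration.

p_feed_down = 0.001  # assume each utility feed is down 0.1% of the time

for n_feeds in (1, 2, 3):
    p_all_down = p_feed_down ** n_feeds
    print(f"{n_feeds} independent feed(s): P(everything down at once) = {p_all_down:.9f}")
```

The catch is that "independent" is doing all the work there: a shared transfer switch, a shared substation, or one backhoe in the wrong trench correlates those paths right back together, which is exactly why the setup above splits the transfer switches and routes too.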
 
Yeah, but at their scale you are basically talking about a small-scale power plant. Most fabs don't have UPS backup beyond brownout/instantaneous-hiccup coverage. That fab would probably require ~4 LM2500+ gensets to supply 30 minutes of backup power. With standard super-high-output diesels you would need between 30 and 50 gensets; going to the top-end Cat gensets, you would need 14-18. And all of those would require between 60 and 600 seconds of battery standby to cover the transition. Basically, you are looking at a $100+ million cost minimum. Considering that fabs always have multiple-path Tier 1 power contracts (aka the last to be dropped outside of designated emergency Tier 0) and historically unplanned power interruptions are basically non-existent for them, it works out to a pretty costly luxury.

And quite unlike data centers, you can't bounce to battery/reserve and rapidly cycle down the power demand of a fab to preserve the in-flight processes. So unless they had reserve power or the like to allow for a full fab shutdown during the outage, that wafer loss was pretty much inevitable. The power demand is constant and very inflexible.
 
You guys actually believe this was an 'accident' ? ... hahahah ... yeahhhhhh - riiiiiiiiiiight

Simple math can tell anyone they just made hundreds of millions of dollars more, because demand will now be even greater than before, so they can raise prices even higher. Brilliant move on Samsung's part.

In other news, all 25 Microcenter locations have moved their memory into locked, secure display cases and/or cabinets because prices are expected to rise even further. And no, I'm not kidding.

I just saw 8GB of DDR4 for $149, a few different sets.
 
Not sure what the problem is. NAND memory isn't in short supply as the demand for it isn't that high. This is literally a non-issue for consumers.
 
Samsung has already made so much money from the "outage" that they have decided to turn the power off for the next 2 months. You'd think they would turn the power off for a longer period, but they decided not to get too greedy. :wacky:
 
The ideal way to set this up is:

Have multiple power generation plants (city-wide) provide power to the building via different paths (as mentioned above)
Have multiple transfer switches to switch between these generation plants in case one fails (someone could accidentally dig up the underground feed from Generation Plant B just outside the building)
Give each power feed its own UPS and generators
Divide the plant up among the power feeds

Most big data centers are set up this way, so they never lose power unless there is a region-wide disruption like a natural disaster. But usually you know that in advance and can shut down your servers/machines in time.

We take this to the rack: each rack is fed from different power feeds, and each rack has its own UPS/transfer switch. The UPS is only used until the generators can kick in, level off, and become a stable power source.

The problem is that until very, very recently there simply wasn't a feasible way to do that short of building your own peaker power plant. They would basically require the equivalent of the Tesla standby battery plant that was just installed in Australia, plus an array of gensets that can handle ~100 MW continuous (~4 LM2500+ gensets at tens of millions each).

In contrast, data centers can generally get by with distributed at-rack UPS plus a couple of cheap megawatt-class diesel gensets.
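For a sense of the scale gap between the two cases, a quick sketch. The rack load and ride-through time are assumptions; the Australian battery's commonly cited rating is ~100 MW / 129 MWh:

```python
# Scale gap between backing up a fab and backing up a single data-center rack.
# Fab load, outage length, rack load, and ride-through time are assumptions;
# the Hornsdale battery in Australia is commonly cited at ~100 MW / 129 MWh.

fab_load_mw, fab_outage_hours = 100.0, 0.5       # ~100 MW fab, 30-minute outage
rack_load_kw, rack_ride_through_min = 10.0, 5.0  # 10 kW rack, 5 minutes on UPS

fab_energy_mwh = fab_load_mw * fab_outage_hours
rack_energy_kwh = rack_load_kw * rack_ride_through_min / 60

print(f"Fab, 30 min at 100 MW:    {fab_energy_mwh:.0f} MWh (vs ~129 MWh at Hornsdale)")
print(f"One rack, 5 min at 10 kW: {rack_energy_kwh:.2f} kWh")
```

That is a difference of four to five orders of magnitude per protected unit, which is why at-rack UPS plus a few diesel gensets is routine for data centers and a non-starter for a fab.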
 
This is what you get for not changing the UPS batteries when they keep beeping. :oops:

This statement hits far too close to home. I've done so much contract IT work, and I've lost count of how many places I've walked into where the UPSes were all beeping and someone mentioned they had been doing that for years, completely oblivious to the implications.
 