How quickly does Windows get corrupted when OCing?

Colonel_Panic

I work for a company developing some new waterblocks, and we're now in the testing phase. As a result, I've been spending a lot of time overclocking our 4790K, and through repeated BSODs and resets it looks like Windows is getting corrupted. We're pushing for max speed rather than long-term safety, so we're expecting some issues here and there, but I'm very curious how much overclocking and resetting it takes to corrupt a Windows install. Just today we've hit some graphics issues (we'll be swapping the motherboard), but there were also some BCD errors.

Anyone else develop software issues while pushing the limit?
 
Disable fast boot and you will be fine for a long time. The corruption comes from the way fast boot (Fast Startup) works: instead of a full shutdown it does a hybrid shutdown, saving crucial system state so it can be restored at the next startup, and a crash or hard reset can leave that saved state inconsistent. Disabling Windows fast boot will help you avoid OS corruption; it has worked this way since Windows 8.
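For reference, you can switch it off from an elevated command prompt instead of digging through the power options (this disables hibernation as well, since fast boot depends on it):

powercfg /h off

If you'd rather keep hibernation, fast boot alone can be turned off by unticking "Turn on fast startup" under the power button settings, or by setting the HiberbootEnabled value to 0 under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power.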
 
Depends on the version. The newer the version, the more resilient it seems to be. I was testing OCs and repeatedly resetting my machine while it was booting into Win10.

After about 10 or so hard resets, either inside Windows or during the boot process, it never got corrupted.

I doubt a Win7 system would've survived. Even if it did boot, I'm sure WMI would have been corrupted to hell.
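(If WMI does get mangled, winmgmt /verifyrepository from an elevated prompt should tell you, and winmgmt /salvagerepository can usually rebuild it without needing a reinstall.)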
 
I would just make an image of the system and then when it gets corrupted you can just restore the image.

As far as how long it will take before something gets corrupted, it depends a lot on the drive and the drive controller as well as what the computer was doing when it locked up/reset/whatever.

The only time an OS should get corrupted is if it is writing to the drive when the crash happens.

A few times I have found that the swap file had gotten corrupted, and simply deleting it after hooking the drive up to another computer completely fixed it.

The hibernation file can also get corrupted. I always disable it on all my machines because it just wastes space on the drive equal to the amount of RAM installed on the system.

Absolutely no need for hibernation on a system that is using an SSD for the boot drive.
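For what it's worth, both of those jobs can be done from an elevated prompt. powercfg /h off gets rid of hiberfil.sys and frees the space, and the built-in backup tool can capture a restorable system image, for example:

wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet

(the drive letters are just an example; point -backupTarget at whatever spare drive you have).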
 
Disable fast boot and you will be fine for a long time. The corruption comes from the way fast boot (Fast Startup) works: instead of a full shutdown it does a hybrid shutdown, saving crucial system state so it can be restored at the next startup, and a crash or hard reset can leave that saved state inconsistent. Disabling Windows fast boot will help you avoid OS corruption; it has worked this way since Windows 8.

Thanks for that. I've managed to reset to a new BIOS, so with fast boot disabled I'll try it again.
 
I work for a company developing some new waterblocks, and we're now in the testing phase. As a result, I've been spending a lot of time overclocking our 4790K, and through repeated BSODs and resets it looks like Windows is getting corrupted. We're pushing for max speed rather than long-term safety, so we're expecting some issues here and there, but I'm very curious how much overclocking and resetting it takes to corrupt a Windows install. Just today we've hit some graphics issues (we'll be swapping the motherboard), but there were also some BCD errors.

Anyone else develop software issues while pushing the limit?

If your OC is unstable then it's corruption galore.

Disable fast boot and you will be fine for a long time. The corruption comes from the way fast boot (Fast Startup) works: instead of a full shutdown it does a hybrid shutdown, saving crucial system state so it can be restored at the next startup, and a crash or hard reset can leave that saved state inconsistent. Disabling Windows fast boot will help you avoid OS corruption; it has worked this way since Windows 8.

Cool, will disable fast boot

I would just make an image of the system and then when it gets corrupted you can just restore the image.

As far as how long it will take before something gets corrupted, it depends a lot on the drive and the drive controller as well as what the computer was doing when it locked up/reset/whatever.

The only time an OS should get corrupted is if it is writing to the drive when the crash happens.

A few times I have found that the swap file had gotten corrupted, and simply deleting it after hooking the drive up to another computer completely fixed it.

The hibernation file can also get corrupted. I always disable it on all my machines because it just wastes space on the drive equal to the amount of RAM installed on the system.

Absolutely no need for hibernation on a system that is using an SSD for the boot drive.

What if it doesn't POST?
 
I never really connected overclocking and OS corruption. I can see how it's a possibility, but I would think that risk is about as low as it can be with Win8/Win10.
 
I've been overclocking for well over 20 years and have suffered an inordinate amount of hard drive corruption.
This stopped abruptly when I started using SSDs; my first one was the Vertex 2.

I overclock and tweak my system a lot because I am always trying something new; it's my kind of fun.
There is a CM Xtreme IV cooler in the post for my 980ti lol.
My CPU is custom water cooled.
I still get a fair number of crashes while arsing about with clocks and voltages, but I've stopped worrying about them because I don't see any problems now.

Since using an SSD I have not had to run scandisk on the OS drive even once.
I'll rephrase: that is never!
I have had to run scandisk only twice on another drive in the same system, and that's in more than 5 years.
A stark difference.


My 2 main SSDs have been the Vertex 2 and 840 Pro.
I had a few months with an OCZ Vector which died after a crash, but I put that down to a crap drive that probably caused the crash in the first place, given my previous experience of no problems.
OCZ went into administration shortly after, so I got off lightly!

I haven't heard of other people suffering corruption issues while using an SSD for the OS either, but that isn't proof it doesn't happen.
I can only claim that if you use the 840 Pro, at least, you won't have to worry about OS corruption.
 
Ahhhh. So basically, what I'm gathering from this is that the OS abruptly becoming unresponsive, or rapid restarts without a "proper shutdown", is the cause of the OS corruption on spinners, vs SSDs, which don't access/write the same way.
 
That would make sense if the corruption were caused by power failure, but overclocking crashes cause corruption with the OS on a hard drive even though the drive is still powered.
Perhaps the difference is down to the speed at which tasks can be completed,
i.e. once the OS has crashed, it may have a knock-on effect of causing the hard drive to freeze before it can complete all pending writes, whereas an SSD can easily complete them in time.
I can't draw conclusions on why at this point; there isn't enough information.


It is worth noting that my PC is on a UPS and has been the whole time I have had SSDs.
I don't encounter power failures on my PC, so I don't know whether a mains failure can corrupt my SSD.
I only know that overclocking crashes cause me no problems.
 
By sheer coincidence I just had my first ever Scandisk of my SSD!

I was testing memory overclocks on my gfx card as it's been given a new cooler.
The screen corrupted during testing and Windows locked up.
On reboot, Windows scanned both partitions of my SSD and found a few orphaned files on the Windows drive.
Everything seems OK afterwards.

Video memory crashes are by far the worst because they always cause a complete machine lockup, whereas GPU core overclock crashes usually just cause the video driver to crash and recover, and Windows carries on fine.
So I guess you can still have issues with an SSD, just much less often.
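(If it ever finds problems again, a manual chkdsk C: /f from an elevated prompt, which it will offer to schedule for the next reboot on the system drive, is a cheap way to make sure it caught everything.)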
 
Overclocking can randomly corrupt any file written while you are overclocked, or even the filesystem structure itself, because the bits may not be correct when they are written (the result of a calculation failure, corrupted memory, ...). I'd say there is no way to know, before you start overclocking, how long it will take for corruption to happen.
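One cheap way to catch that kind of silent corruption is to hash a set of known-good files before a tuning session and compare afterwards, e.g. with the built-in certutil (the path is just an example):

certutil -hashfile C:\testdata\sample.bin SHA256

If the hash changes on a file you never touched, something got scrambled on its way to or from the disk.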
 
Once a blue screen + reboot caused my web browser's bookmarks (stored in XML) to somehow become embedded in my IM client's chat history. The mangled links became entries in my chat history.
Two completely separate pieces of software, different vendors... Hard to believe, but it happened.
That was back on XP. I think it may have been the defragmenter service committing a write.
The computer was unstable due to a bad power supply, but a sudden reboot because of an overclock can probably do the same.
 
I work for a company developing some new waterblocks, and we're now in the testing phase. As a result, I've been spending a lot of time overclocking our 4790K, and through repeated BSODs and resets it looks like Windows is getting corrupted. We're pushing for max speed rather than long-term safety, so we're expecting some issues here and there, but I'm very curious how much overclocking and resetting it takes to corrupt a Windows install. Just today we've hit some graphics issues (we'll be swapping the motherboard), but there were also some BCD errors.

Anyone else develop software issues while pushing the limit?

My current build has a Windows "test" installation on a separate SSD, just for OC validation purposes, to avoid possible corruption of the "prod" installation. I disable the "prod" drive in the BIOS before booting from the "test" drive.

After several BSODs and freezes, the only thing I detected was that the Task Scheduler stopped working, and the only reason I noticed was that one of the hardware monitors had an entry in the Task Scheduler library to start it with Windows. Mind you, this is a minimal Windows install with just the OS and the software for stress testing and monitoring.

Any old SSD and a valid Windows 7 key should be good for 120 days of testing without activation.
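If you want to check whether the test install has actually picked up damage after a round of crashes, the built-in tools run from an elevated prompt are usually enough:

chkdsk C: /f
sfc /scannow

And if I remember right, the 120 days comes from the initial 30-day grace period plus three uses of slmgr /rearm, each of which resets it.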
 