SENTRY: Console-sized gaming PC case project

CABLE MANAGEMENT INFO


I want to talk about how cable management works in our case and hear what you think about it.

Current build of prototype IV looks like this:



There's 10.5"/266mm GTX 970, 2.5" SSD in main slot, 2.5" 9.5mm HDD under the VGA's turbine and chieftec SFX-L.

You might wonder where did all those 450mm cables went - they are actually all in there tucked between the gpu cooler and the central wall (this tiny 25mm gap hidden by central drive mount) and it kind of works to not have the mess of cables hanging around in the middle of the case.

Note that the power cord goes above the motherboard only because the full length riser takes too much space here and the drive mount in prototype IV did make things hard around SFX-L to manage properly.

Also the red sata cable is the only cable tucked in in that gap above modular connectors so it's not a huge mess out there.

After few long, up to 3 hour gaming sessions on such config I can say for sure that this drive placement is quite okay for the temps of both gpu and the drive. This hard drive temperatures were 42 degrees max while the gpu cooling was unaffected.This config should work okay with blower type coolers on 10.5" and shorter cards so you still can have two 2.5" drives mounted with SFX-L.

You might've noticed that we're planning to rearrange the drives in the central mount compared to the image in the recent post:

Little update:

For optimal use in that drive configuration you'll need a single cable with more than 20cm between its two SATA power connectors, since the new drive mount requires it. Also, ribbon-type cables might make the whole build easier to manage.
 
Looks gorgeous; heck, it looks better than my Raven RVZ01 does.

Interesting that both the GPU and HDD are okay with that configuration; I would have guessed that there would have been major heat issues from it.

Just out of curiosity, why did you cover the GPU intake rather than the PSU fan? Was there reasoning behind it, or was it just what showed off the case's options the best?
 
Interesting that both the GPU and HDD are okay with that configuration; I would have guessed that there would have been major heat issues from it.
The thing is, we're not really blocking the intake of the 10.5" card with this drive. It only overlaps about 25-30% of the intake's diameter, which works out to roughly 15-20% of the intake area, so it barely does anything. At the same time, this type of card doesn't have the PCB and components around the blower, so there's practically no radiated heat coming from that area.
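If anyone wants a rough sanity check on the diameter-vs-area point, here's a quick back-of-the-envelope sketch (my own, not a measurement): it treats the covered part of the intake as a straight-edged circular segment and ignores the clearance gap under the drive, so it's an upper bound on the blocked area.

[CODE]
import math

def blocked_fraction(depth_over_diameter):
    """Fraction of a circular intake's area covered by a straight-edged
    strip whose depth is the given fraction of the intake's diameter."""
    r = 1.0                                   # unit radius; the result is scale-free
    h = depth_over_diameter * 2 * r           # depth of the covered strip
    theta = 2 * math.acos(1 - h / r)          # central angle of the circular segment
    segment_area = r * r * (theta - math.sin(theta)) / 2
    return segment_area / (math.pi * r * r)

for cover in (0.20, 0.25, 0.30):
    print(f"{cover:.0%} of diameter covered -> ~{blocked_fraction(cover):.0%} of intake area")

# 20% of diameter covered -> ~14% of intake area
# 25% of diameter covered -> ~20% of intake area
# 30% of diameter covered -> ~25% of intake area
[/CODE]

So even as an upper bound the blocked area stays in the same ballpark, and the air that still gets in through the gap between the drive and the shroud lowers the effective restriction further.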

I think this might run a little hotter for full-length cards that have something sitting above them, especially ones with open-air coolers.

Just out of curiosity, why did you cover the GPU intake rather than the PSU fan? Was there reasoning behind it, or was it just what showed off the case's options the best?

I'm not sure what you mean by "cover"? The photo isn't shopped, and this type of blower cooler for the 970 doesn't take in air from the back. If that doesn't answer your question, I'm not sure what you're referring to :)
 
Interesting that both the GPU and HDD are okay with that configuration; I would have guessed that there would have been major heat issues from it.

You're right. HDD and SSD manufacturers usually say their products can work at up to 70 deg C, and some :))) case manufacturers think an SSD still working at 60 deg C is fine, because it's below the 70 C line. Well... we think that is wrong. Think about car tires: all of them have a "speed index", which means, for example, that some tires can run at up to 190 km/h. Which do you think will last longer: tires driven all the time at about 170 km/h, or tires occasionally taken on a highway at 140 km/h?

If you want to know more about such problems, please check Necere's LRPC (S1) case topic. Right now he's fighting a similar issue. Luckily, our design doesn't have that problem.
 
I'm not so sold on higher temperatures being a problem, particularly for SSDs. They're not subject to mechanical degradation, NAND failure is only very weakly correlated with temperature below the breakdown threshold, and controller failures are more down to firmware bugs than tunnelling effects (which again, occur above a temperature threshold). For HDDs, [url=http://research.google.com/archive/disk_failures.pdf]Google's drive failure trends study[/url] didn't find any strong correlation between temperature and failure rate, though they had few drives above 45°C.
For SSDs, I'd treat them like processors as they're produced on very similar silicon processes: 70°C to 80°C is acceptable. For HDDs, whatever the manufacturer specifies as the maximum operating temperature for the rated lifetime is what those drives get torture-tested to (e.g. the venerable DT01ACA300 is rated to 60°C with no duty cycle caveats). Cooler temperatures may or may not prolong life after 3 years of constant use, but I wouldn't trust ANY HDD to be in active use after 3 years regardless of operating conditions.

Personally, I think the intermittent 50°C HDD temperatures Necere saw are entirely acceptable operating conditions for a consumer case. If the drive were hitting 50°C under normal operation that would be a worry, but full GPU load is not going to be a steady state operating condition.
 
I'm not so sold on higher temperatures being a problem, particularly for SSDs. They're not subject to mechanical degradation, NAND failure is only very weakly correlated with temperature below the breakdown threshold, and controller failures are more down to firmware bugs than tunnelling effects (which again, occur above a temperature threshold). For HDDs, [url=http://research.google.com/archive/disk_failures.pdf]Google's drive failure trends study[/url] didn't find any strong correlation between temperature and failure rate, though they had few drives above 45°C.
For SSDs, I'd treat them like processors as they're produced on very similar silicon processes: 70°C to 80°C is acceptable. For HDDs, whatever the manufacturer specifies as the maximum operating temperature for the rated lifetime is what those drives get torture-tested to (e.g. the venerable DT01ACA300 is rated to 60°C with no duty cycle caveats). Cooler temperatures may or may not prolong life after 3 years of constant use, but I wouldn't trust ANY HDD to be in active use after 3 years regardless of operating conditions.

Personally, I think the intermittent 50°C HDD temperatures Necere saw are entirely acceptable operating conditions for a consumer case. If the drive were hitting 50°C under normal operation that would be a worry, but full GPU load is not going to be a steady state operating condition.

Thanks for the paper link, I'll check it out.

From what I understand, the problem Necere has is that his drives hit the 50-degree mark quickly and he stopped the test at that point. At least that's how I understood what he wrote - I might be wrong here, but if I'm right, it means his temps would climb even higher under prolonged loads. At the same time, we see a stable 46 degrees under prolonged gaming loads for the drives on the central mount and 42 degrees for the drive mounted under the GPU blower intake.

From my experience, the problem is not so much about shortening the hard drive's lifetime as about increasing read error occurrences, which in games can mean stuttering and framerate drops while data is streamed from local storage.
 
I'm not so sold on higher temperatures being a problem, particularly for SSDs. They're not subject to mechanical degradation, NAND failure is only very weakly correlated with temperature below the breakdown threshold, and controller failures are more down to firmware bugs than tunnelling effects (which again, occur above a temperature threshold). For HDDs, Google's drive failure trends study didn't find any strong correlation between temperature and failure rate, though they had few drives above 45°C.
For SSDs, I'd treat them like processors as they're produced on very similar silicon processes: 70°C to 80°C is acceptable. For HDDs, whatever the manufacturer specifies as the maximum operating temperature for the rated lifetime is what those drives get torture-tested to (e.g. the venerable DT01ACA300 is rated to 60°C with no duty cycle caveats). Cooler temperatures may or may not prolong life after 3 years of constant use, but I wouldn't trust ANY HDD to be in active use after 3 years regardless of operating conditions.

Personally, I think the intermittent 50°C HDD temperatures Necere saw are entirely acceptable operating conditions for a consumer case. If the drive were hitting 50°C under normal operation that would be a worry, but full GPU load is not going to be a steady state operating condition.

Thanks for your support. I read the document, and according to it, as the temperature rises toward the edge of the range, the probability of damage (probability of errors) also rises. If you're within the normal temperature range everything is OK, but once you cross a specific temperature line, failures may occur. I understand the authors said there is only a small correlation between temperature and failure probability, but please check the range their drives were in - for them, temperatures over 45 deg C are "very high".

What's more, the authors say that drives older than 3 years can also be damaged when the average temperature rises above 40 or 45 deg C. We know many people have drives older than 3 years, and according to this report 50 deg C is a killing temperature for them. So I understand why Necere wants to solve this problem, even though he sees 50 C while his drives are rated to work at 60 or even 70 deg C.
 
That looks pretty darn clean, I like it! The 970 is a great fit for this case, I'd even expect open air coolers to run damn cool with that HDD in the front.
 
That looks pretty darn clean, I like it! The 970 is a great fit for this case, I'd even expect open air coolers to run damn cool with that HDD in the front.

It takes some time to fit it all inside, but it looks okay. It will be easier and should look even better in the final version, when we hide the power cord and maybe I figure out how to hide those SATA cables under the 5cm PCI-e riser.

Also here's the RVZ02 "cable management" for comparison :)
[Image: Image_19S.jpg]
 
I'm not so sold on higher temperatures being a problem, particularly for SSDs. They're not subject to mechanical degradation, NAND failure is only very weakly correlated with temperature below the breakdown threshold, and controller failures are more down to firmware bugs than tunnelling effects (which again, occur above a temperature threshold). For HDDs, [url=http://research.google.com/archive/disk_failures.pdf]Google's drive failure trends study[/url] didn't find any strong correlation between temperature and failure rate, though they had few drives above 45°C.
For SSDs, I'd treat them like processors as they're produced on very similar silicon processes: 70°C to 80°C is acceptable. For HDDs, whatever the manufacturer specifies as the maximum operating temperature for the rated lifetime is what those drives get torture-tested to (e.g. the venerable DT01ACA300 is rated to 60°C with no duty cycle caveats). Cooler temperatures may or may not prolong life after 3 years of constant use, but I wouldn't trust ANY HDD to be in active use after 3 years regardless of operating conditions.

Personally, I think the intermittent 50°C HDD temperatures Necere saw are entirely acceptable operating conditions for a consumer case. If the drive were hitting 50°C under normal operation that would be a worry, but full GPU load is not going to be a steady state operating condition.

As promised, I read the paper you linked, and I don't think you can draw those conclusions from it.

Actually, I also don't think the conclusions in the paper itself are sound, because they didn't take temperature deltas into account at all - they just logged averages. The temperature delta itself is what shortens the lifetime of many components, which is obvious to any properly trained engineer, and they didn't account for it at all!

The problem with this study in terms of hard drive lifespan is that they test a hard drive population under a 24/7 workload over its whole lifetime, which means that from first mounting and "burn-in" the drive never drops below, say, 35 degrees, and its operating range stays between that and 45 degrees, which is optimal. Meanwhile, a typical home user turns off their PC and the drive cools down to 18-22 degrees, so you get a delta of about 30 degrees up to 50-degree temperatures, instead of a 15-degree delta between the 35 and 50 degree marks.

It's also noteworthy that they didn't log the age of "infant" drives that died under high load, since those were only flagged for replacement after days or even weeks, as they state in the paper. If those drives died within the first day of operation, that would be perfect proof of the temperature delta killing them. Why? Because when you replace a faulty drive in an array with a new one, you immediately stress it and burn it in just by rebuilding the array and replicating all the data from the mirror drive.

That covers drives getting bricked and becoming unusable, and as I said before, it doesn't take performance drops and dropped read cycles into account.
 

Thanks for the link, but once again that's really not what we're talking about:

Look at this:

[Image: blog-temp-totals.jpg]


Also, the highest average temperature in the drive model table was 30.5 degrees Celsius.

This simply means that within their tested range of 14 to 38 degrees, the temperature doesn't matter. If all drives run around 30 degrees on average, then the datacenter itself was quite well cooled.

Once again - datacenter conditions and test conclusions are good information for drives in servers, not for home computer use, since datacenter drives work 24/7 within small temperature deltas rather than being turned on and off and heating up from 18 degrees to 50 and more.
 
Unfortunately, only datacenters have enough of a population to generate useful reliability data.
 
Unfortunately, only datacenters have enough of a population to generate useful reliability data.

Yeah, that's the problem with that research in the context of home PCs.

The only real data that would help would come from some corporation that doesn't use pre-built computers with high-end cooling like HP/Dell but low-budget components instead, and I think that combination is also quite unlikely to yield any info.
 
Unfortunately, only datacenters have enough of a population to generate useful reliability data.

...and that's why we have to make assumptions about what temperature range is acceptable for the long life of HDDs and SSDs in a standard PC environment. If a manufacturer says its HDD can work at up to 60 deg C (most 2.5" HDDs produced today are rated up to that temperature), then we try to keep the temperature as low as possible. Temperatures over 50 deg C can shorten your HDD's lifetime.

The situation is a bit different with drives like SSDs. Most manufacturers claim their drives can work at up to 70 deg C, which means even temperatures over 50 C shouldn't be a problem for them. But as you know, SSDs aren't as cheap as HDDs, so we assume some customers will use the case in a 1x HDD + 1x SSD configuration, and we have to be ready for that. That means we want to keep the internal HDD temperature below 50 deg C. Not because the HDD won't work at temps over 50 - it will, but for how long? You also have to remember what my bro said: it's not just about the temperature itself; what matters more are those big deltas, which can occur frequently.
 
I can only provide anecdotal evidence rather than quantitative, but the building I support has a good 6,000-7,000 actively used workstations, evenly split between SFF single-socket machines and larger dual-Xeon monsters (the dual-CPU workstations are crammed 2-3 per desk in 'self-ventilated' cabinets that mainly just recycle their own exhaust). None of the machines have dedicated fans or ducted airflow for the drive bay. All came installed with 2.5" Velociraptor drives as standard. There has been no noticeable difference in failure rate between the hotboxed workstations and the open-air SFF boxes, with the drives in the dual-CPU workstations getting hot enough to be painful to handle. Enforced powerdowns mean at least one power-cycle (and thus thermal cycle) per week, with most machines doing one cycle per working day without exception.
In terms of the observed failures, actual mechanical failure of head or hub mechanisms is vanishingly rare (I'd estimate about 0.5% or less), and drive controller failures (e.g. failure to detect the drive in BIOS) account for about 10%-20%. The remaining 'failures' are mostly attributed to corruption, but the one-strike-and-out rule prevents checking whether these were software-related or something wrong on the platter.
Very few of these workstations are used for storage-intensive tasks, and for the few whose work isn't effectively "check email and write Word documents", the load is pretty bursty. I'd say they were a good analog for most home use, apart from gaming (none of these machines have high performance GPUs).
 
I can only provide anecdotal evidence rather than quantitative, but the building I support has a good 6,000-7,000 actively used workstations, evenly split between SFF single-socket machines and larger dual-Xeon monsters (the dual-CPU workstations are crammed 2-3 per desk in 'self-ventilated' cabinets that mainly just recycle their own exhaust). None of the machines have dedicated fans or ducted airflow for the drive bay. All came installed with 2.5" Velociraptor drives as standard. There has been no noticeable difference in failure rate between the hotboxed workstations and the open-air SFF boxes, with the drives in the dual-CPU workstations getting hot enough to be painful to handle. Enforced powerdowns mean at least one power-cycle (and thus thermal cycle) per week, with most machines doing one cycle per working day without exception.
In terms of the observed failures, actual mechanical failure of head or hub mechanisms is vanishingly rare (I'd estimate about 0.5% or less), and drive controller failures (e.g. failure to detect the drive in BIOS) account for about 10%-20%. The remaining 'failures' are mostly attributed to corruption, but the one-strike-and-out rule prevents checking whether these were software-related or something wrong on the platter.
Very few of these workstations are used for storage-intensive tasks, and for the few whose work isn't effectively "check email and write Word documents", the load is pretty bursty. I'd say they were a good analog for most home use, apart from gaming (none of these machines have high performance GPUs).

That's a good number of computers, but did you log the actual internal temperatures of those drives? If you did, please tell us how you logged them and how you analysed the data. Note that Velociraptor drives have their own cooling heatsinks, and the manufacturer wouldn't add those just for fun, you know.

Note that a lot of drives do report their errors to SMART, and the Google paper relied on that data as well. We've got a much smaller number of workstations at our office, but I keep and occasionally check old replaced drives, and I can say for sure that SMART data doesn't give any reliable, usable info except for cycle counts and reallocated sectors. I've got around twenty 5+ year old drives that barely POST yet have perfectly clean SMART data, so analysing it is useless.
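For what it's worth, if anyone wants to collect that kind of data themselves, a rough sketch like the one below is enough on Linux with smartmontools installed (run as root) - it just polls smartctl and logs the few attributes we consider worth anything; the /dev/sda device path, the hdd_log.csv filename and the 60-second interval are placeholders, not anything specific to our test setup:

[CODE]
import csv
import subprocess
import time

# The only SMART attributes treated as reliable in this discussion.
WANTED = {"Temperature_Celsius", "Power_Cycle_Count", "Reallocated_Sector_Ct"}

def read_smart(device):
    """Run `smartctl -A` and pull the raw values of the wanted attributes."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    values = {}
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in WANTED:
            values[parts[1]] = parts[9]
    return values

if __name__ == "__main__":
    with open("hdd_log.csv", "a", newline="") as log:
        writer = csv.writer(log)
        while True:
            data = read_smart("/dev/sda")              # placeholder device path
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                             data.get("Temperature_Celsius"),
                             data.get("Power_Cycle_Count"),
                             data.get("Reallocated_Sector_Ct")])
            log.flush()
            time.sleep(60)                             # sample once a minute
[/CODE]

A log like that over a few gaming sessions would tell us a lot more than end-of-life SMART snapshots do.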

Also, 'self-ventilated' SFF doesn't mean the CPU cooler + PSU fan won't induce some airflow by themselves.
 
Note that Velociraptor drives have their own cooling heatsinks, and the manufacturer wouldn't add those just for fun, you know.
Oddly, whether the integrated passive heatsink is used varies from model to model and manufacturer to manufacturer, regardless of chassis size or design. Many will have only a thin 2.5"-3.5" converter bracket rather than the heatsink.

Unfortunately I don't have access to temperature logs, and likely they aren't kept (and definitely not centralised and easily accessible; data compartmentalisation issues).
 
The thing is, in standard bricky case configurations you have the power supply sucking the hot air out from the CPU cooler. If you don't have power-hungry GPUs, as you said, then you don't really need much airflow in there and the components won't heat up that much. Assuming you've got entry-level Quadro cards in those machines, they might draw 35-55W, so it's not a lot of heat to dissipate.

Anyway, without the temperature data there's no solid basis for assuming those drives were running at high temperatures.

Also, I'd like to get back to the point that we were talking about missed/dropped read cycles under high temps rather than drive failures. I'm more afraid of making a case that ends up heating the drive so that games drop FPS and the system stutters or crashes because of temps, with no clear indication that the drives are the problem.
 
It's a bit older now but I'm going to ask again: do you plan to put plastic plugs on both sides now, or will the finalized standing solution be the same as the prototype has? Which actually would be more than fine.
 
Why would we put plugs on both sides if the stand holes are only on one side? For symmetry?
 
I actually have no idea :D That's what I understood, and it kind of stuck with me. Happy you're not going to - I like the screws :)
 
So we've been trying to figure out how to make the case support more power-hungry cards with open-air cooling, and we've ended up with one solution:

Since open-air cooling had no problems in the vertical position, we figured we need to let the accumulating hot air escape above the GPU in the desktop position as well.

Because of that, we need to decide how to design this hot air outlet. We have prepared three designs to discuss.

1) Default outlet, same as all other vent areas



2) Large Sentry text instead of default holes



3) Small Sentry text within default holes



Here's the straw poll for this:
http://strawpoll.me/5706717

Let us know what you think
 
I can't stand the first option.

The large text actually looks good to me, but I would be concerned about it structurally.

Do you think there'll be a way to order one without those holes cut out, or would that incur extra costs from the machinists? I wouldn't think it would, but...
 
What's so bad about it? The first option looks quite neutral in both configurations, I think.

As for the text - we will figure out how to do it properly; we're just exploring the design ideas for now.

We'd like to settle on one design for all units rather than adding more options. Keep in mind that it's there for thermal purposes, so in the end it will give you better cooling for the card.
 
I'm ok with both 1 and 2, but not 3.

I think my ideal would be design 1 with 1 set of vent slots turned 90 degrees.
 
You mean one row of holes angled symmetrically to the vents over the PSU/motherboard? I think one row might not be enough, and I don't think it would look good.
 
What I meant is to flip all the holes over the PSU/motherboard OR all the holes over the GPU.
 
What's so bad about it? The first option looks quite neutral in both configurations, I think.

As for the text - we will figure out how to do it properly; we're just exploring the design ideas for now.

We'd like to settle on one design for all units rather than adding more options. Keep in mind that it's there for thermal purposes, so in the end it will give you better cooling for the card.

Fair enough, though only if it's configured horizontally with an open-air graphics card.

I absolutely understand wanting to have one design.

My issue with the first one is twofold: one, the holes show what's underneath and let dust through (even with filters), and two, it just looks too busy. At that point there's nearly more hole than metal.

Here's my question, I suppose... Insatiable urge to over-engineer aside, how often will this use-case come up? Is it going to be often enough, with a serious enough issue, to merit the changes?

My answer is probably not. Even in the problematic use-case, temperatures are better than in most other cases in this segment, _plus_ the number of people using it horizontally with miniature furnaces for graphics cards is not going to be huge... Nor are the temperatures bad enough to call them a real problem. If one wants to use the case in a way that slightly restricts airflow with a graphics card that's an airflow hog, that's fine, but expect a few % premium for it.
 
I've been following this one for a while, but this is my first post. Please go with the default design, or at least offer the option to get a replacement panel without the text. The clean industrial aesthetic of this case will be ruined by slapping its name prominently on the side.

Is there any update on when this will be available to order? Any other colours planned?
 
Fair enough, though only if it's configured horizontally with an open-air graphics card.

I absolutely understand wanting to have one design.

My issue with the first one is twofold: one, the holes show what's underneath and let dust through (even with filters), and two, it just looks too busy. At that point there's nearly more hole than metal.

Yeah, I understand that. I actually tested the visibility of the interior through the filter, and there's barely anything visible that way, so that's not a problem.

I'd also prefer it not to have those holes if possible. I'm trying to make this look as good as possible while solving the other issues.

Here's my question, I suppose... Insatiable urge to over-engineer aside, how often will this use-case come up? Is it going to be often enough, with a serious enough issue, to merit the changes?

My answer is probably not. Even in the problematic use-case, temperatures are better than in most other cases in this segment, _plus_ the number of people using it horizontally with miniature furnaces for graphics cards is not going to be huge... Nor are the temperatures bad enough to call them a real problem. If one wants to use the case in a way that slightly restricts airflow with a graphics card that's an airflow hog, that's fine, but expect a few % premium for it.

While it looks like we could drop this idea altogether, since as you've seen in the test results the performance was quite okay, the problem is that the case cover gets hot.

And actually it's quite a common use-case - it fixes this thermal problem with open-air cooled cards and single-slot ones, and it can also give a slight performance boost to blower cards. Open-air coolers are more common on the market than blower types, since they have bigger low-RPM fans and are quieter than reference blowers and the like.

Over the last week I've been testing different configs using small fans, and while the results were great with the blower-type and single-slot cards, it didn't work out with the open-air cooled card, and even a really quiet 6cm fan would occasionally resonate and generate a chirping noise. Cooling the back side of the card also gave a minor boost clock increase under prolonged loads.

I've been following this one for a while, but this is my first post. Please go with the default design, or at least offer the option to get a replacement panel without the text. The clean industrial aesthetic of this case will be ruined by slapping its name prominently on the side.

Is there any update on when this will be available to order? Any other colours planned?

I think we'll be ordering the black pre-final prototype sometime this week. We need to figure out this vent, update the designs, and do some minor tooling for weld positioning, and then we're ready to order.

If everything goes smoothly we should have the final design, and we'll order the final white unit that we'll send to linustechtips for a review.

From that point, all we need is to be prepared to process the orders, so hopefully we'll be ready for pre-orders before Christmas and start production after gathering orders.
 
I like 1 the best; I think it actually looks better than the previous panel without the holes. Somehow I can't cast my vote though - the captcha's not working for me. Will try again later.
 
I think it should be either the default holes or a completely different detail such as the huge text, but at the same time Zombi likes the combo of the default vents with some text inside.

For me, the text within the default outlet looks okay on the vertical stand, but it completely breaks the composition in the desktop position, since every default vent is slanted and the text isn't. And it wouldn't look good to slant the letters in the opposite direction to make the text italic either, because then it looks bad on the stand.

At this point it's 21 vs 19, or 21 vs 20 if iFreilicht hasn't voted yet, so it's quite balanced.

It's good to know that most of you aren't simply opposed to additional venting changing the looks a little for the sake of thermal performance.
 
Just voted :D actually had to use IE because Vivaldi couldn't make the captcha work.
 
I think that depends on the design of the vent. Text can be part of the design, but it can also break it tastelessly.
 
How do I get on a list to be among the first 100-200 orders? I have all the parts for my Steambox build... just holding out for a case like this.
 
We're not going to limit the batch. If we get, for example, over 1000 orders, then they'll be processed sequentially, as fast as we can handle them, piece by piece.

So most likely how fast you get it will depend on how quickly you ordered (your order number).

From our perspective, if we get a huge number of orders then we can get some help and hire some temps to speed things up, but we're not sure how fast the metal parts can be manufactured, so we assume batch-by-batch processing and shipping.

Anyway, if you're here then you're most likely to be in that initial batch anyway.
 
Excellent...any estimate on timeframe? Everything looks great btw!
 