Games with 16GB system requirements... fact? or fiction?

coho66 · n00b · Joined: Nov 8, 2012 · Messages: 1
I've noticed that there is a growing list of games bold enough to state system requirements that break the 8GB system memory barrier.

According to the developers, it appears that

Gears of War 4
Gears of War: Ultimate Edition
Titanfall 2
Star Wars Battlefront
ReCore
Dying Light
Battlefield 1
Deus Ex: Mankind Divided
Quantum Break
Dishonored 2
Fallout 4 w/ high-res texture pack
Mirror's Edge Catalyst
Forza Motorsport 6: Apex
Halo 5: Forge

all require 12-16GB of system memory to perform at their best.

Putting this into perspective with build budgets and system configs, it'd be interesting to know if one actually needs 16GB, and if so, how much of a difference it makes. I tried to find information that could confirm these requirements, but results are spotty, and there's certainly nothing from a single source.

I realize that in recent times, with the cost of memory, most progressives/enthusiasts just suggest or incorporate 16GB as a precautionary measure. But curiosity is knocking, and sometimes it's nice to have numbers to back things up. I mean, people seem to compare 4GB/8GB or 3GB/6GB variants of graphics cards all the time, so... it'd just be interesting to see how system memory affects performance in games that actually state the requirement. Is 16GB the new 8GB yet?

Unless someone can point me to a source out there that discusses this already, I'd suggest it as a potential article topic.
 
It is because developers do not trust regular people to determine how much "free" memory they have available for the game after the OS and all the applications still running in the background (office suites, PDF readers, web browsers, etc.).
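To see that gap on your own machine, here's a minimal sketch in Python; it assumes the third-party psutil package (pip install psutil):

```python
# Minimal sketch of the installed-vs-available gap described above,
# using the third-party psutil package (pip install psutil).
import psutil

mem = psutil.virtual_memory()
print(f"Installed RAM:       {mem.total / 2**30:.1f} GB")
print(f"Available to a game: {mem.available / 2**30:.1f} GB")
# On a typical 8 GB machine with a browser and an office suite open,
# 'available' can easily read 4-5 GB -- the margin a stated requirement
# has to absorb.
```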
 
Yup, some do actually "require" it, but I think it's a combo of the above comments. SW:B on my sig system playing at 1080p/60/ultra gives me a total of just over 9GB used. At idle it's around 2GB, so the game only really needs 7. But if someone only had 8GB they'd run into issues. It's better to over-spec than under...
 
I was a bit surprised by this myself, as I have not seen high utilization with 8GB of RAM... but my next builds are all going to 16GB.
 
Guess it really depends on the game; I have seen a few games use 8-12GB of memory.
 
Lol, he claims single channel makes no difference and that it's the extra RAM, but the only thing making a difference there is single channel vs. dual channel: his GPU has only 1GB of VRAM, so it's overflowing into system RAM, making it very reliant on the RAM's bandwidth.
Stupid benchmark. If you're using more VRAM than you have, you should drop texture detail or buy a new GPU, not more RAM.

I tested a few of the games mentioned by the OP (Deus Ex, Dying Light) and they're only using ~3-4GB. That's not to say they can't use more in some parts.
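For what it's worth, some back-of-the-envelope math on why the channel config dominates in that scenario; the DDR3-1600 numbers are purely illustrative, not taken from that benchmark rig:

```python
# Peak theoretical bandwidth of one 64-bit DDR channel vs. two,
# using illustrative DDR3-1600 figures.
transfers_per_sec = 1600e6   # DDR3-1600: 1600 MT/s
bytes_per_transfer = 8       # 64-bit channel

single = transfers_per_sec * bytes_per_transfer / 1e9  # GB/s
dual = 2 * single

print(f"Single channel: {single:.1f} GB/s")  # ~12.8 GB/s
print(f"Dual channel:   {dual:.1f} GB/s")    # ~25.6 GB/s
# Once textures spill out of a 1 GB card into system RAM, halving the
# bandwidth the GPU falls back on is what tanks the frame rate --
# not the total amount of RAM installed.
```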
 
If you may upgrade the CPU in another year or so, I would just make do with your current RAM for a while longer. If it's for games, just keep your OS well tweaked, without background apps, and provided your GPU doesn't run out of VRAM you won't see a performance difference, other than maybe the odd game that really does put you over the limit, in which case you may get a few stutters unless you drop texture detail a notch.

2x8GB can be easier to run, but since you're only running 1600MHz C9, four sticks shouldn't be a problem unless you come across some weird compatibility issue between your old kit and the new one.
 
BF1 does that FPS drop occasionally. I have a 980 AMP. I usually get 120 FPS, but then it will dip randomly. Otherwise everything still runs great.
 
At current and recent memory prices, that's not entirely unreasonable.

That's where I stand on it, building today.

I ran with 2x16GB just due to a photography habit and seeing some Photoshop usage pushing the 16GB I was using earlier; before the RAM price spikes, my kit came in at ~US$160 for DDR4-3000. Hard to pass that up, given how long we keep this stuff.

Absent some kind of heavy lifting, though, 2x8GB is where it's at these days.
 
I fly DCS (Digital Combat Simulator) by Eagle Dynamics, and things have changed considerably. We now use 16GB across the board to be able to fly without jitter, lag, stutter, etc. The recommended RAM is 16GB; the minimum is 8GB. The DCS forum is very active, and many sim pilots coming to DCS, or coming back to it, have severe issues with 8GB. It also affects stability in some respects.

As I also use VMware / Hyper-V from time to time, I opted for at least 32GB on this new DCS rig, just to be safe. I have flown it test-wise with as little as 6GB (it wouldn't even load sometimes, and loading the map takes... ahhh... go get a coffee while it loads, kinda thing). 8GB is so-so; don't expect much, it is labeled as MINIMUM.

VRAM is about the same; there are not many games or simulators that really need 12GB of Tesla-class VRAM and make use of it... well, DCS would use a 16 or 24GB VRAM card if there were one. The maps are so huge (Caucasus and Nevada TTR, soon Normandy) and detailed that VRAM explodes, especially if you fly VR O_O!


Usually, MSI Afterburner shows me 11-12GB of RAM usage when flying online on servers, and about the same or a bit more for swap-file usage. My GTX 980's VRAM is maxed: 100% VRAM and 99% GPU. It's also IPC-bound, so you're better off with a few FAST cores rather than MANY modest cores!

It's FREE, btw! DL it and have a try :) Joystick/throttle/rudders HIGHLY recommended.

https://www.digitalcombatsimulator.com/
 
It's more a matter of principle. Maybe that's just the old-school coder frame of mind.

I understand the mindset. I've been playing PC games, working on PCs, and building them for just over two decades now. Back in the late 1990s it was a different time. For that matter, before that it was a different time. When the platform architecture was limited in how much memory it could address, developers had to learn to use as little memory as possible. Back when I had 16MB of RAM in my system, when most people had 4MB, it cost me well over $400 to get that. A few years later I remember paying similar pricing for a 128MB DIMM with my employee discount and being happy about it. Today, our technology isn't so different. 20 years ago most systems had around 4MB to 8MB, with enthusiasts starting at 8MB and having 12 to 16. These days you can pretty much take the same numbers and change the "M" to a "G" and it's about the same breakdown for game requirements and system averages.

In the 1984-1985 season of the classic '80s TV show Knight Rider, there is an episode called "Lost Knight". In the episode, KITT is damaged and needs replacement memory. They aren't specific about the type of storage or memory it is, but they state the module that needs to be installed has a capacity of 5,000MB. I bring this up because, at the time, that number sounded absolutely outrageous, as though it were pure fiction. Today, it's hard to imagine 5GB of anything being remarkable.

These days RAM is cheap. It has been for a while, except when we change RAM standards. DDR3 was cheap for a very long time. DDR4 isn't as cheap, but it's not outrageous either. 16GB of RAM is reasonably cost-effective to obtain.

One thing you have to keep in mind is that memory usage doesn't necessarily mean that said usage is required, or even helpful in regard to performance. Some applications (SQL Server, for example) will use all the resources you allow them to, whether they are needed or not. Similarly, CoD4:MW Remastered has an option in the menu to fill empty video RAM. The game doesn't need to do this, but it's an option.
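To illustrate that "usage isn't requirement" pattern, a toy sketch; the cache and load_asset names are hypothetical, not from SQL Server, CoD4, or any real engine:

```python
# A cache that happily fills whatever budget you give it, yet sheds
# entries with no loss of correctness -- only a loss of convenience.
from collections import OrderedDict

class OpportunisticCache:
    def __init__(self, soft_limit):
        self.items = OrderedDict()
        self.soft_limit = soft_limit        # raise it and "usage" grows to match

    def get(self, key, loader):
        if key in self.items:
            self.items.move_to_end(key)     # mark as recently used
            return self.items[key]
        value = loader(key)                 # the only truly required work
        if len(self.items) >= self.soft_limit:
            self.items.popitem(last=False)  # evict least recently used
        self.items[key] = value
        return value

def load_asset(name):
    return f"<decoded {name}>"              # stand-in for a slow disk read

cache = OpportunisticCache(soft_limit=2)
for name in ("tex_rock", "tex_grass", "tex_rock", "tex_sky"):
    cache.get(name, load_asset)
print(list(cache.items))                    # ['tex_rock', 'tex_sky']
```

Give the same cache a soft limit of thousands and memory "usage" balloons, while the program would run fine (just reloading more often) with far less.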
 
Yes, true for some. Just as I said, if you try DCS with less than, say, 12-16GB, you WILL run into issues. Same goes for VRAM; the minimum I estimate to run somewhat trouble-free, unless you turn it all down, is 4GB at my 1440p resolution. The maps are so huge and the models so detailed that it really cries for memory.

Thing is, if it doesn't work perfectly, it sucks! Have some jitter or stutter in any flying machine, especially a helicopter when hovering... and you are lost, or at least having a very hard time compared to someone with high FPS and no stutter.


Of all the games on this rig (my son has a ton of them on it), DCS is the most demanding when it comes to the GPU, and it won't really support XF or SLI properly. The CPU also uses only one core for the core game engine, one for sound, and two for DX11 feeding the GPU. If it used two or more cores for the engine, my attitude towards Ryzen would drastically change; for now, the highest IPC with four cores is King of DCS. Anything else doesn't matter, as the other three cores (sound + DX11) are usually not driven to the max, but the one for everything else is.

This wouldn't need to be if they coded better: made sleeker code, as Dan_D said above, and made it natively spread across cores by addressing this right at the beginning of coding. Adding it later on is a pain, and in most cases it doesn't happen, as the half-life of modern AAA games is like 6-12 months max until some other blockbuster game takes over. DCS is different in this aspect too, more like FSX, maintained for a very long time now with consecutive updates and upgrades; it's just that no one among the devs thinks SMP is an advantage.

I dunno, maybe they don't fly their own sim online with 50 others?? No one on the pilots' side doubts we need more CPU power, and since per-core IPC doesn't increase anymore like it did in the past, SMP seems to be the only way out.


I would rather not need to OC up to the limit to fly stutter-free, but use more cores at an affordable price (Ryzen again). Till then, COOL THAT SUCKER & CLOCK IT SKY HIGH :D
 
Your understanding of game engines and game development is flawed. I don't claim to be an expert on the matter, but as I understand things, it simply doesn't work that way. Even if you could separately set different game tasks' affinity to specific processor cores, it wouldn't be equal work. If anything, coding a game in this manner would hinder performance more than it would help it. Even within multithreaded applications, sometimes clock speed or IPC wins out. We've shown this in numerous benchmarks over the years. We've also shown that even multithreaded game engines that can scale up to 8 threads still become GPU-limited when you push past certain resolutions.

You can't just simply use moar cores and call it done. Not every workload will task cores equally, nor is every workload suited to multithreading. It isn't a matter of developers making a simple choice. Let's use your example of one core for the game engine, one core for sound, and two for DX11. First off, that's not how games are designed, but let's go ahead with the example. First and foremost, the game engine itself covers physics and other aspects of the game world that aren't necessarily graphical in nature. AI behavior, positioning of every object in the game, hit detection, damage calculations, etc. are all part of that, and even this is an oversimplification of the game engine and what exactly it does. These tasks are much more difficult than audio processing would ever be. Audio to a modern processor isn't much of a chore at all; audio, especially as it is today, isn't any more demanding than it was several years ago. Dedicating one core to it would be a waste of resources. A game not needing more cores isn't the same thing as the game not properly being able to leverage more of them. Sometimes an application workload simply doesn't benefit from more resources because it doesn't actually need them.
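To make the serial-dependency point concrete, a toy frame loop in Python; the stage names and millisecond costs are invented for illustration, not taken from any real engine:

```python
import time

# Trivial stand-ins; real engines do vastly more per stage.
def poll_input():            return {"throttle": 1.0}
def step_physics(w, inp):    time.sleep(0.004); return w   # ~4 ms
def update_ai(w):            time.sleep(0.002); return w   # ~2 ms
def build_render_list(w):    time.sleep(0.003); return []  # ~3 ms
def mix_audio(w):            time.sleep(0.0005)            # ~0.5 ms

def simulate_frame(world):
    inputs = poll_input()
    world = step_physics(world, inputs)   # AI needs physics results...
    world = update_ai(world)              # ...rendering needs AI results...
    calls = build_render_list(world)      # ...so these cannot overlap.
    mix_audio(world)                      # the one easily offloaded piece
    return world, calls

start = time.perf_counter()
simulate_frame({})
print(f"frame time: {(time.perf_counter() - start) * 1000:.1f} ms")
# Moving audio to its own core saves ~0.5 ms of a ~9.5 ms frame; the
# dependent physics -> AI -> render chain still runs serially.
```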

As for SLI, games not taking advantage of the feature often does come down to lazy-ass fucking developers. This problem will only get worse as DirectX 12 essentially puts multi-GPU rendering onto the shoulders of developers instead of AMD or NVIDIA. Developers aren't going to bother to implement explicit multi-GPU or any multi-GPU rendering techniques because it doesn't make any sense for them to do so. Most games are designed to be multiplatform, and the PC is the (potentially) far more powerful platform of the bunch. Again, I'm oversimplifying, but if the developers create something that the consoles can run well, you do not need SLI or CrossFire to make it work or look good on the PC. As long as the PC isn't the lead platform pushing the envelope, you won't see multi-GPU being very useful going forward. It isn't that your game can't be made to use multiple GPUs, but rather that the developers have no interest in doing the work to make it happen. Prior to DX12 it was up to AMD and NVIDIA to some extent, but by the same token, game developers had their share of work to do in order to make things work as well.
 


Seems to be a 5-10 fps difference.

What's interesting about that video is how conservative some games are about how much RAM they use with 8GB, even if there are a few GB remaining... and then there is Black Ops III.

Overall, I'm really surprised by this. I really thought 8GB would be enough, but it seems some companies are afraid to use the memory. It looks like 16GB is definitely going to be the minimum moving forward.
 
Dan,

Thanks for your explanation of how you see those things. In regard to DCS you are mostly right; the sound core, for example, is underutilized, no doubt. The information that DCS uses up to two cores for DX11 rendering is from the devs, not my finding. I took that as true.
The thing with this one core for the engine is that it really is maxed out on most CPUs if they aren't top-notch AND overclocked. Anything below 4.x GHz suffers in many scenarios where this one poor core just can't push it all. The BIG question is: what can help this? Would SMP, if applied properly, have a chance to speed it up, or would synchronization between cores slow it down again, as the cores would still depend on each other in one or more ways? The devs say they looked into it and SMP is not an option. That is an answer, but not a solution; those are two different things.

What is the answer for games hitting the IPC wall? Overclock even further? LN2 for all?

SLI has never really worked; the last SLI setup I had that worked was called Voodoo2 and had 32MB each :D All the others that followed gave more trouble than fun. I won't do SLI or XF again, seriously. Tons of money, energy consumption, etc. for almost no return in most scenarios.

DCS is not on console; it's PC only. It will never come to console, Mac, or Linux, which I regret a lot. There are actually two versions: one for the consumer market and one for industrial and military use only, tailored to your needs (if you PAY).


I would not hunt the IPC crown if it weren't for this sim; I just love flying smoothly :D

edit*: The one reason I have that hot RAM is that many in the DCS forum say FAST RAM helped them overcome nasty hiccups in VR; with RAM faster than 2133 they got over the hurdle and smoothed things out.
All for this sim. I usually use a 15" MacBook Pro Retina for all my daily needs; I dislike MS, btw, for many reasons. I only use Windows for gaming.
 
What I've read is similar. Some game logic requires other threads to be completed, or else the game is left waiting. Some things just cannot be farmed out to secondary threads because they're vital for game operation. Stuff like sound isn't going to hurt gameplay if it's not perfectly aligned with the action.
 
Dan,

Thanks for your explanation of how you see those things. In regard to DCS you are mostly right; the sound core, for example, is underutilized, no doubt. The information that DCS uses up to two cores for DX11 rendering is from the devs, not my finding. I took that as true.

I'd generally take this at face value if it came from the developers. What they are talking about is how the game engine handles rendering calls, or what it does specifically as it interfaces with the DirectX API. There are elements to running a game that simply aren't done on the GPU. The GPU is very good at specific things but isn't as general-purpose as a CPU.

The thing with this one core for the engine is that it really is maxed out on most CPUs if they aren't top-notch AND overclocked. Anything below 4.x GHz suffers in many scenarios where this one poor core just can't push it all. The BIG question is: what can help this? Would SMP, if applied properly, have a chance to speed it up, or would synchronization between cores slow it down again, as the cores would still depend on each other in one or more ways? The devs say they looked into it and SMP is not an option. That is an answer, but not a solution; those are two different things.

In a sense, you've answered your own question. When multithreading isn't a viable way to improve performance, higher clock speeds become more relevant. What you need to understand is just what I've said: when you run a game on a computer, the processing tasks are not as simple as "audio here" or "DX11 stuff here." The tasks required are far more numerous than you'd imagine. Using the DX11 API isn't just a graphics matter; it's a combination of Direct3D, XInput, DirectCompute, XAudio2, etc. The game engine will make calls to DirectX for rendering, input, etc. while you are gaming. Not all of these tasks require the same computing power, and not all of them will end up running on the same processor core all of the time. They are distributed by what's available at the time. These things are far more dynamic than you realize.

Again, some specific tasks do not benefit from multithreading while others do. When something doesn't benefit from multithreading, all things being equal architecturally, higher clock speeds become the only way to improve performance.
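Amdahl's law puts a number on that; a quick sketch, with the parallel fraction picked purely for illustration:

```python
# Amdahl's law: if a fraction p of the work parallelizes, the best-case
# speedup on n cores is S(n) = 1 / ((1 - p) + p / n).
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 16):
    # assume only 30% of a frame's work parallelizes
    print(f"{cores:>2} cores: {speedup(0.30, cores):.2f}x")
# 2 cores: 1.18x, 4: 1.29x, 8: 1.36x, 16: 1.39x -- past a point,
# clock speed and IPC on the serial portion are all that's left.
```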

What is the answer for games hitting the IPC wall? Overclock even further? LN2 for all?

There is no IPC wall. IPC applies to both single-threaded and multithreaded performance. You can overcome IPC deficiencies in a specific CPU architecture with increased clock speeds. However, this has almost never worked out, because greatly increasing clock speeds comes with its own set of challenges. Simply put, greatly increasing the clock speed of a processor causes horrendous power consumption and produces more heat than can be dissipated easily. The Intel NetBurst microarchitecture had lower IPC than its P6 predecessor and lower IPC performance than the AMD CPUs of the day. Similarly, AMD's Bulldozer had higher clock speeds than its Intel counterparts while being severely deficient in regard to IPC performance. In both cases, the clock speed advantage could not overcome the competition.
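A quick back-of-the-envelope on why clocks rarely rescue weak IPC; the IPC and clock figures below are invented for illustration, not measurements of NetBurst or Bulldozer:

```python
# Rough single-thread model: throughput ~ IPC x clock.
def relative_perf(ipc, ghz):
    return ipc * ghz

low_ipc_design = relative_perf(ipc=1.0, ghz=3.8)   # clocks high, does less per cycle
high_ipc_design = relative_perf(ipc=1.5, ghz=3.0)  # clocks lower, does more per cycle

print(f"{high_ipc_design / low_ipc_design:.2f}x")  # 1.18x for the high-IPC part
# To merely match a 50% IPC deficit, the low-IPC design would need
# 1.5x the clock (5.7 GHz here) -- and power and heat scale worse than
# linearly with clocks, which is exactly the trap described above.
```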

SLI has never really worked; the last SLI setup I had that worked was called Voodoo2 and had 32MB each :D All the others that followed gave more trouble than fun. I won't do SLI or XF again, seriously. Tons of money, energy consumption, etc. for almost no return in most scenarios.

I have had SLI since the Voodoo2 days, and then from the 6800GT/Ultra days to now. I've had many 2-way, 3-way, and 4-way SLI configurations over the years. I've also used AMD's multi-GPU offerings from the early days to recently (although with far greater difficulty). Your statement is false. SLI has worked more often than not. That's not to say there aren't some challenges to working with it, but it works even now. Unfortunately, since game developers have gotten increasingly lazy and because console sales drive their development, SLI is an afterthought at best. I've played games where SLI was the only way to achieve acceptable performance levels. I've almost always found the purchase worthwhile. When I haven't, it's normally been AMD's broken drivers that made me think otherwise. Multi-GPU scaling past two cards hasn't always been worth the cost, so that's hit and miss for me.

I would not hunt the IPC crown if it weren't for this sim; I just love flying smoothly :D

The IPC crown, as you put it, will not make a difference. While the architectural design of a CPU is important, your GPU is far more important when it comes to any kind of gaming.

edit*: The one reason I have that hot RAM is that many in the DCS forum say FAST RAM helped them overcome nasty hiccups in VR; with RAM faster than 2133 they got over the hurdle and smoothed things out.

Some applications benefit from more memory bandwidth. This is not surprising.
 
Dan,

Thanks again for your time and for sharing your insight :D

You were lucky then with your SLI setups. I had two of them after the Voodoo2 era, and neither properly supported the games I mainly played back then. Anyway, I tend to stick to one big card and avoid the problems many have with SLI and DCS. But some do use it for DCS, with mixed results.


This seems like a dilemma: some things don't make sense in SMP, but IPC is not as high as we wish it were, so how do we close this gap? I guess only the devs can tune the code, rewrite parts, etc. to accommodate the topology of the hardware mainly being used. This takes a lot of time!
 
I'd say it's starting to be a problem, considering the same sticks are 20-30% more expensive now than they were 6 months ago. Stupid under-supply...
 
This seems like a dilemma: some things don't make sense in SMP, but IPC is not as high as we wish it were, so how do we close this gap? I guess only the devs can tune the code, rewrite parts, etc. to accommodate the topology of the hardware mainly being used. This takes a lot of time!

Well, like much of this discussion, it isn't cut and dried. IPC is what it is because Intel has to strike a balance between performance and power consumption. They could make the CPUs considerably faster, but it would cost us greatly in some other way. Similarly, they could use less power and run much cooler, but at the cost of performance. The gap can only be closed by improvements in technology or a change in market direction that allows the evolutionary priorities we may desire, such as PC gaming getting big enough to warrant a no-holds-barred, desktop-only CPU that isn't something recycled from the mobile or server markets. Unless our market gets big enough to warrant it, Intel will never build for us specifically. You either have to be a huge company and one of their top 7 or so buyers, or be their biggest customer base, to get consideration. Gaming desktops would have to overtake mobile and server to do that.

As for developers, there is only so much they can do. Some things, by their nature, can only be done in certain ways. More than that, they have to learn over time to handle development on certain hardware better. The problem is that developers do not build for the PC, and even when they do, it isn't for a specific processor. If they did, they could leverage the SDKs and tools for that architecture and do far more than they do now. Code optimization is done for the lowest common denominator, not the fastest and most powerful machines.

There isn't just one thing holding back performance and visual quality in the PC market; it has never been like that in the last two decades. It always comes down to multiple issues and legacy hardware or software concerns that prevent more innovation and increased performance and visual quality. The mass market drives the bus, and right now that mass market is in consoles and chips designed around performance per watt rather than pure performance.
 
Well said indeed; it makes sense from A to Z.


I for one am happy that I finally got this 7700K stable at 5GHz WITH the RAM at 3866 and the same CL as the XMP 3600 profile, reproducible :D

It gives me some headroom in IPC for gaming; I don't do production work with this machine apart from some VMware for fun.



For me, the next big step I am waiting for is getting rid of the slow interfaces, lanes, and slots, and changing how the system is laid out in general.

There I hope we will also see some huge performance gains; I hope the next 5 years bring a big game changer into the mass market.


Right now SSDs are not only overtaking themselves, they're being overtaken again by NVMe, which is already knocking on the x4 door and asking for x8 or more right away.

But where does all that data go? Across the PCH??? Well, rather NOT :D


The biggest joke in my world is the obsolete 1Gbit NICs and switches. As soon as 10Gbit switches come down to earth with their pricing I am adopting it... and it is already only a fraction of the speed we need, and not even rolled out.
 
Fallout 4 UHD threw me for a loop with 16GB of RAM until I remembered I had disabled my swap file.
It even closed down Explorer, complaining of a lack of memory.

Since enabling the swap file (on an SSD), after an hour or so of play, total memory use is about 8GB plus a 12GB swap file.
And no more borked Windows.
It might be OK with 8GB or 12GB of RAM.
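If you want to watch the same thing on your own box, a minimal check in Python; it assumes the third-party psutil package:

```python
# Snapshot of physical RAM vs. swap/pagefile, via psutil (pip install psutil).
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM used:  {ram.used / 2**30:.1f} / {ram.total / 2**30:.1f} GB")
print(f"Swap used: {swap.used / 2**30:.1f} / {swap.total / 2**30:.1f} GB")
# With the pagefile disabled, any allocation past physical RAM simply
# fails -- which is how a game can take Explorer down with it.
```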
 
Well said indeed; it makes sense from A to Z.


I for one am happy that I finally got this 7700K stable at 5GHz WITH the RAM at 3866 and the same CL as the XMP 3600 profile, reproducible :D

It gives me some headroom in IPC for gaming; I don't do production work with this machine apart from some VMware for fun.

Nice.

For me, the next big step I am waiting for is getting rid of the slow interfaces, lanes, and slots, and changing how the system is laid out in general.

Don't get your hopes up. The system architecture isn't changing significantly anytime soon. We'll see increases in PCIe lanes for sure and eventually PCIe 4.0 will be here. DMI 3.0 was obsolete the day it hit. We do get much of our faster I/O through the CPU and virtually all of it through the CPU on the HEDT side of things.

There I hope we will also see some huge performance gains; I hope the next 5 years bring a big game changer into the mass market.

Six years ago no one ever thought that we'd still be picking up 3% performance per generation to this day. I'm sure most people thought we'd have another Core 2 or Sandy Bridge by now, and it hasn't happened yet. Despite the memory speed increases and platform improvements, the CPUs have remained virtually stagnant. It isn't that Intel can't improve here, or even that a lack of competition has prevented it in the strictest sense. Intel has only had to focus on performance per watt. If Ryzen is good enough, it may force Intel to take a wattage hit on the desktop side and give us some genuinely faster CPUs.

Right now SSDs are not only overtaking themselves, they're being overtaken again by NVMe, which is already knocking on the x4 door and asking for x8 or more right away.

NVMe M.2 devices took off in a way no one in the motherboard industry expected. M.2 was an almost certain loser of a standard back when it was SATA only; with the proliferation of NVMe drives, it's clearly a runaway hit. The problem is that we do not have the infrastructure on the motherboard to keep adding NVMe capacity. It's chewing up PCIe lanes. It's a shit form factor in the desktop market too. I wish U.2 had caught on better. It would solve the problem of M.2's shit form factor taking up too much PCB real estate.

But where does all that data go? Across the PCH??? Well, rather NOT :D

As usual, I'm just about the first one to point out that DMI 3.0 is weaksauce and the connection between the PCH and CPU is lacking. However, this isn't as bad as I generally make it out to be. It's the kind of bottleneck that shows up most often in benchmarks and a lot less in the real world. It's tough to even envision a scenario where this is problematic without trying to max out the bandwidth through specific situations that aren't necessarily representative of anyone's actual usage. Still, the limitation on going to three or more NVMe drives is very real, but the benefits of doing this in a desktop RAID array in the real world are minimal at best. This is again more of a theoretical problem than an actual one. That said, I think Intel was very short-sighted with DMI 3.0, as it left no room for growth.
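Rough numbers behind that, treating DMI 3.0 as effectively a PCIe 3.0 x4 link; the drive speed is an illustrative figure, not a specific product:

```python
# DMI 3.0 ~ PCIe 3.0 x4: ~0.985 GB/s per lane after 128b/130b encoding.
dmi_gb_s = 4 * 0.985           # ~3.94 GB/s shared by everything on the PCH
drive_gb_s = 3.0               # illustrative Gen3 NVMe sequential read

for n in (1, 2, 3):
    demand = n * drive_gb_s
    tag = "<- saturated" if demand > dmi_gb_s else ""
    print(f"{n} drive(s): {demand:.1f} GB/s wanted vs {dmi_gb_s:.2f} GB/s DMI {tag}")
# One fast drive nearly fills the link by itself; a second or third only
# shows up in sequential benchmarks, which is the point made above.
```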

The biggest joke in my world is the obsolete 1Gbit NICs and switches. As soon as 10Gbit switches come down to earth with their pricing I am adopting it... and it is already only a fraction of the speed we need, and not even rolled out.

Gigabit may be obsolete, but we hardly need 10GbE in the home. You can't get more than 1GbE speeds for internet in this country that I'm aware of. Even if you could, you often end up waiting on someone else's web infrastructure rather than your own; rarely do you realize the true potential of GbE internet on home networks. Streaming 4K video doesn't require that much bandwidth. Online gaming doesn't require that much bandwidth. How often does the average Joe, or even the enthusiast, push multi-gigabyte files around their local network? I'd wager it's a rare thing. So again, what the fuck is 10GbE for? Pushing that kind of bandwidth routinely in the home seems unlikely in 99% of the cases out there. People who can push it and feel it's necessary have probably already invested in 10GbE infrastructure.

I get moving ahead with technology before we are actually constrained by older standards, but needing 10GbE in the home seems way off at this point.
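Some ballpark arithmetic on that; the per-stream bitrates are rough illustrative figures:

```python
# How much of plain gigabit a busy household actually uses at once.
GBE_MBPS = 1000
loads_mbps = {"4K video stream": 25, "online gaming": 1, "1080p video call": 4}

total = sum(loads_mbps.values())
print(f"{total} Mbps of {GBE_MBPS} Mbps ({100 * total / GBE_MBPS:.0f}% of GbE)")
# ~3% -- only large local file transfers change the math in 10GbE's favor.
```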
 
The only use I can see for 10GbE is to try and shuttle around mass storage. You should be able to get ~1.1GB/s, if you needed it, but the likelihood of having something that fast on one system and needing it on another is pretty low, as the data in question would have to be stored on NVMe (or possibly a SATA RAID), which doesn't really fit the 'mass storage' requirements of typical home computing.

If it fits yours, then grab some NICs and a switch, I know a doctor that does kidney removal on the cheap!
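For reference, roughly where that ~1.1GB/s figure comes from; the overhead fraction is a rough assumption, not a measured value:

```python
# 10 Gb/s line rate, minus protocol overhead, divided by 8 bits per byte.
line_rate_gbps = 10.0
payload_fraction = 0.94   # rough TCP/IP-over-Ethernet efficiency, 1500-byte frames

print(f"~{line_rate_gbps * payload_fraction / 8:.2f} GB/s usable")  # ~1.18 GB/s
# Real-world copies land nearer 1.0-1.1 GB/s once disks and SMB/NFS
# overhead join in -- i.e., you need NVMe-class storage on both ends.
```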
 
The biggest reason to go to 10GbE over 1GbE is that they cost exactly the same to implement. The biggest con is that you need more PCIe lanes to truly provide enough bandwidth for it. With DMI and lane limitations that's just not realistic on mainstream motherboards. 10GbE is done here and there in the HEDT segment but very rarely.
 
On the boards, sure. But the switches? Man.

(might help if the Netgears and TP-Links of the world would make the jump...)
 
If you can't run your game on your RAMdisk, then you have too little RAM.
 
40GbE is often cheaper than 10GbE; that's how fucked up it is. In the fleabay market for personal use I wouldn't even bother looking at 10GbE via RJ45, and no serious datacenter ever gave a shit about it because it pulls too much power per port. The overall failure is somewhat related to this upcoming 5GbE/2.5GbE nonsense.

Per your U.2 comment: it would've been nice, but $100/custom/unavailable cables for too long kinda killed that. It is pretty awesome in servers where everything is baked into a prebuilt chassis, though. Meanwhile we got laptop leftovers, but at least the performance is great.
 
What killed U.2 is the fact that Intel is the only SSD manufacturer to support it in the consumer space.
 
If you can't run your game on your RAMdisk, then you have too little RAM.
Looking at the sizes of some games... 50GB plus... hmmm.

My own rule of thumb is about $300, or double the RAM I need. At this time I need 16GB, so I put in 32 for about $200. It's just so cheap and lasts for so long that it's not a place I ever cut corners. It's nice to have a large memory cache, and to never have to close programs because oh jeez oh my that memory is getting low.
 