LOL!!
Look, things are going to change, that is just a fact, and computing and programming will change as well. However, if hardware does not lead the way, why bother? Which is why hardware must lead the way.
I do not agree with that at all, and that mindset appears to be extremely short-sighted. Might as well just say that quantum computing is impossible because, you know, we do not have it yet.
Hardware always leads software, and you cannot predict the future.
No doubt about it, it will change. Everything changes. Just not in the way you suggest. Computer programming is logic, and what you are suggesting would require a logical fallacy to be true, which is impossible.
Logic isn't 'short-sighted'.
And yet, this is not based upon logic, at least in the way you want to be locked into it. That is ok, things will change and programming will change with it, not today but definitely tomorrow.
I was speaking "developmental" per discussion. Sure go ahead and run a billion iterations of minesweeper or whatever to saturate anything. It is irrelevant to the discussion at hand.We have plenty of software that will take all the hardware we can give it.
I was speaking "developmental" per discussion. Sure go ahead and run a billion iterations of minesweeper or whatever to saturate anything. It is irrelevant to the discussion at hand.
If C needs the output of B to run, and B needs the output of A to run, there is no way you can ever run these three at the same time.
Y'all can argue all you want. Benches don't lie. Core-heavy CPUs from AMD are mopping the floor with Intel on everything other than gaming. And even there they are only behind by about 5% in certain titles.
"Things" will change, but not in any meaningful manner that will suddenly turn games into embarrassingly parallel problems.
If you look at the tasks where a 12-core beats a faster-clocked 8-core, they are almost entirely composed of obviously "embarrassingly parallel" problems.
These are problems where you can easily break up the task into smaller pieces, and dependencies between the pieces don't exist. Usually it's just chopping up your data set into small pieces for each thread to work on.
Video encoding (x265): the screen's pixels are your data set, you break them into small chunks, and every thread can work on its chunk independently. 3D rendering (Cinebench, AKA AMD's bench of choice): again, break the screen into chunks and work on them independently. Image filtering: same...
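For the curious, chunking like that looks roughly like the sketch below. This is just a toy illustration I'm adding (a fake 1080p buffer and an "invert" filter, not taken from x265 or any real renderer): each thread gets its own slice of the pixel buffer and never touches anyone else's.

```cpp
// Embarrassingly parallel sketch: each thread filters its own chunk of a pixel
// buffer, and no chunk depends on any other chunk.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

void filterChunk(std::vector<std::uint8_t>& pixels, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        pixels[i] = static_cast<std::uint8_t>(255 - pixels[i]); // toy "filter": invert the pixel
}

int main() {
    std::vector<std::uint8_t> pixels(1920 * 1080, 128); // fake 1080p frame, one byte per pixel
    unsigned threads = std::thread::hardware_concurrency();
    if (threads == 0) threads = 4; // hardware_concurrency() may legitimately return 0

    std::vector<std::thread> workers;
    std::size_t chunk = pixels.size() / threads;
    for (unsigned t = 0; t < threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == threads) ? pixels.size() : begin + chunk;
        workers.emplace_back(filterChunk, std::ref(pixels), begin, end);
    }
    for (auto& w : workers) w.join(); // every chunk finishes on its own, in any order
    return 0;
}
```

Throw more cores at that and it scales almost linearly, because no thread ever waits on another.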
Games are completely the opposite of this. It's a simulation based on player agency, so it doesn't have a data set to parcel out. It's always reacting in a dependent cascade.
Sure you can squeeze out some parallelism, which developers have done, but it isn't going to radically improve from here, even with more cores becoming the norm.
When you have a game with a decent amount of parallelism, you still get nailed by Amdahl's Law. The more cores you throw at it, the less each one matters, the less the parallel portions matter, and the more the single threads of control rise to dominate run (frame) time.
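To put rough numbers on Amdahl's Law (back-of-the-envelope only; the 60% parallel fraction below is a number I made up for illustration, not a measurement of any real engine): speedup with n cores is 1 / ((1 - p) + p / n), so the serial part puts a hard ceiling on what extra cores buy you.

```cpp
// Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction of
// a frame's work that parallelizes. p = 0.6 is an assumed example value.
#include <cstdio>

int main() {
    const double p = 0.6;
    for (int n : {1, 2, 4, 8, 16, 32}) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%2d cores -> %.2fx faster frame\n", n, speedup);
    }
    return 0;
}
```

That prints roughly 1.43x at 2 cores, 2.11x at 8, and only 2.39x at 32; even with infinite cores the example caps out at 1 / (1 - 0.6) = 2.5x, which is exactly the "single thread of control dominates frame time" problem.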
Today, but not tomorrow. Things will change, our understanding will change, and how we do things will change.
Not at this basic level, no. It's clear you don't understand the problem space enough to offer meaningful input.
It's clear I do understand better than you think, I am just more open-minded about it. You just want me to agree and be a yes man, and that is not going to happen.
Honestly, from a gaming perspective I think the largest change is going to be a larger emphasis on instruction sets and less so on thread performance. As the number of cores goes up, eventually you will reach a point where an instruction thread can't be broken down any further and thread count ceases to matter. That number will change with the application, but you will always reach that same state. The next advances are going to be specialized cores and instruction sets within the CPU; we can shit all over nVidia's RTX cores, but when they are used they do show noticeable improvements. Workloads are becoming increasingly complex, and specialized processors and/or cores are going to become the new norm.
8 cores may be the magic number now, but some of the new AI, destructible-environment, and map technologies I have been seeing in development need 4+ cores on their own. So we would need to start seeing 16+ core consumer CPUs become the norm before those become viable, which is why a lot of developers are sorta excited for the various cloud computing gaming platforms that are launching. Stadia may be the first and currently a smidge "underwhelming", but there are still a lot of really cool things that are only viable with that sort of backend, and with the amount of money major developers and publishers are throwing at it I expect it to be around for a while.
I'm thinking that above 8 cores, I'd use the die space from die shrinks on just giving it MASSIVE amounts of L2 and L3 cache instead. Games seem to love that shit. The less often you have to go back to comparatively slow RAM...
Just think about it logically. If one calculation depends on the output of the calculation that came just before it, you can never spread those two calculations over two separate cores and run them at the same time. This is a limit of logic. No innovation in code or otherwise can solve this dilemma, unless someone invents a time machine.
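In code form, the point looks like this (a toy sketch with made-up step functions, nothing from a real engine):

```cpp
// Toy dependent chain: stepC needs stepB's result and stepB needs stepA's,
// so the three calls can only ever run one after another, no matter how many
// cores are available. (Hypothetical functions, purely for illustration.)
#include <cstdio>

double stepA()         { return 2.0; }     // e.g. read player input
double stepB(double a) { return a * a; }   // needs stepA's output
double stepC(double b) { return b + 1.0; } // needs stepB's output

int main() {
    double a = stepA();
    double b = stepB(a); // cannot start until stepA has returned
    double c = stepC(b); // cannot start until stepB has returned
    std::printf("result = %.1f\n", c);
    return 0;
}
```

No scheduler, compiler trick, or core count lets stepC begin before stepB has produced b; the data dependency is the limit.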
I'm no programmer but am curious - decreasing the latency to each core via new process tech (e.g. stacking), in order to process each calculation, only passing the result from each core to a central core? Central core does clocking/timing (I think you call it scheduling) and 'traditional single thread' role to the software, other cores do the calculating, just sending the output. Wouldn't that save calculation time IF the calculation time is longer than the calc time + latency to send the commands to other cores? Or are games typically relying on many simultaneously executed, very simple calculations?
So really, core-core latency is what needs improvement if that would be possible but isn't currently?
Reason I ask as a hardware orientated type and not a programming type of geek is frames are in ms region. CPU core-core latency ping time is under 50ns for the best current designs. 100ns minimum round trip is 0.00010ms + calculation time.
p.s. sorry if I don't get your prior explanations but the beer isn't helping ;D
Thanks in advance.
Maybe? I don't know.
You might be able to make some efficiency gains by doing something like that; I don't know enough about that subject.
You are still going to have the problem that the vast majority of calculations in a game engine depend on the outputs of other calculations, so they need to be sequential and can't be run at the same time.
In some applications you may be able to do some sort of multicore overkill branch prediction, where you predict the likely outcomes of the next step and pre-calculate them using your many cores, but I think in most gaming situations there are too many potential outcomes for this to have much of an impact.
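For what it's worth, that speculative idea would look something like the sketch below. Purely hypothetical (made-up outcome function, fake player choice), and as noted it only pays off when the set of possible next states is tiny:

```cpp
// Toy "spend spare cores on speculation" sketch: precompute the outcome of every
// possible next player choice on its own thread, then keep only the one that
// actually happens. The other results are wasted work, so this only helps when
// the number of possible outcomes is small -- which it rarely is in a game.
#include <cstdio>
#include <future>
#include <vector>

double simulateOutcome(int choice) { // hypothetical expensive calculation
    double x = 0.0;
    for (int i = 0; i < 1000000; ++i)
        x += choice * 1e-6;
    return x;
}

int main() {
    const std::vector<int> possibleChoices = {0, 1, 2, 3};

    // Speculatively evaluate every candidate choice in parallel.
    std::vector<std::future<double>> results;
    for (int c : possibleChoices)
        results.push_back(std::async(std::launch::async, simulateOutcome, c));

    int actualChoice = 2;                         // pretend the player picked option 2
    double outcome = results[actualChoice].get(); // keep the matching result
    std::printf("outcome = %.3f\n", outcome);     // the other three are discarded
    return 0;
}
```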
Where AI (really machine learning) can help is not only in finding code that may be broken up, but also in figuring out how. I suspect there's quite a bit of work being done in more complex games that's just not yet worth the effort to optimize for multithreading.
And we haven't really seen an example of a complex game where this is done well -- my best example of an optimized game engine is the id Tech engine series, where the feel of the gameplay is the focus.
Battlefield, a game series that could really benefit from optimization, is the opposite. It's a laggy mess.
Note that UI responsiveness is certainly an ongoing topic. In the last year or so it was brought up that terminals tend to be much more responsive than GUIs still today, and there's really no reason for that except that it hasn't been an area of focus on desktop operating systems.
Future games may utilize more cores fully, but that will depend more on creating new types of work, likely in a few different types of games, than on better multi-threading of current [types of] games.
Not sure what Battlefield you are playing; maybe turn off the ray tracing garbage.
On the 1080Ti in my sig, that's what's wrong!
Or perhaps DICE doesn't make game engines that are as responsive as what comes out of id software.
I'm torn on ID software's stuff.
When I played Doom and the Wolfenstein series I was very impressed with how the Tech engine was able to be smooth and pump out the framerates even at high settings.
I'm playing Wolfenstein: Youngblood right now after finishing Far Cry New Dawn, and I am starting to think it's just that they use lower-polygon models or something. The visuals just aren't as impressive.
Thinking back to all of the recent Wolfenstein and Doom games, that pretty much appears to be par for the course. So I'm thinking they just make games with less intensive visuals, and then take credit for their engine being good.
So you're basically saying Carmack doesn't know how to code game engines?
I don't think you know what responsive is. You are talking about Battlefield being a mess in multiplayer, but considering BF5 has a higher tickrate and lower latency, I really can't see how you can talk about id being a benchmark of any sort other than being a good corridor shooter / single-player game. id Software didn't even develop the multiplayer for Doom, but had to take it over quite a while after release.

I really can't speak to the technical abilities of the engines -- just that they manage to make them feel responsive, and I consider them the benchmark.
It's probably a combination of careful resource management and fine tuning of the engines themselves.