Riddle me this: Better IPC?

Yeah, there kind of is, as you have a 2080 Ti (though I don't know what memory you have), not to mention the 5960X

I'd love to help if there was a formalized benchmark. But there isn't. So all we will get are a bunch of loosely collected data points. I spent my ~$3...that's enough lol.
 
Man, if I had the X5675 with me I would give it a try, since I have very nice DDR3-2400 CL8 RAM. It's nowhere near 5.0GHz though.

I have a fairly substantial collection of CPUs on hand. I have Intel CPUs ranging from the Pentium D 965 Extreme Edition to Cascade Lake-X and almost everything in between. I don't have any Broadwell-E CPUs though. I badly degraded the 6950X ES I had. I have fewer AMD CPUs, but I do have a Ryzen 7 1700X, Ryzen 7 2700X, Ryzen 5 3400G, Ryzen 7 3700X and a Ryzen 9 3900X. Unfortunately, or fortunately, I don't have anything earlier than that. I have a 3950X inbound though.

But, I won't go through and test this on anything that isn't already set up.
 
Ok, there is a lot to unpack here. I suspect for a lot of you this is TL;DR unless you like geeking out about this stuff.

This is precisely what I was talking about. You are using Super Pi to predict game performance. Whether you apply a ratio to the results or not is irrelevant.
The data disagrees with that assessment.
You are using it as a game benchmark in a roundabout and somewhat convoluted way.
That is a matter of opinion and does not invalidate the data.
I guess this makes some sense as almost no one seems to have this game and many aren't going to drop the $3 on it just to satiate someone else's curiosity.
Popularity of the game is irrelevant. There is a user base, and it is not zero. It may not be your cup of tea, nor most people's, but that's not pertinent to the discussion.
Ordinarily I'd tell you this is a bad idea because different processors have different architectures. Most of the time I'd be right and most of the time, you wouldn't want to use a third party tool to predict game performance because it simply doesn't work.
That's your strawman argument not mine. Your statement is about general games, my statement is about a specific game - and even that game is under a very specific condition as the game must be CPU limited for the ratio to work.
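
For what it's worth, here is the ratio spelled out as a tiny sketch. The numbers and function name are made up for illustration, not measurements from this thread, and it only applies while the game is CPU limited:

```python
# Sketch of the Super Pi ratio idea (illustrative only).
# Assumption: the scene is CPU limited and Super Pi 1M time scales inversely
# with the single-threaded speed the game sees.

def predict_fps(known_fps, known_spi_seconds, candidate_spi_seconds):
    """Scale a measured CPU-limited FPS by the ratio of Super Pi 1M times.

    A lower Super Pi time means a faster CPU, so the ratio is known/candidate.
    """
    return known_fps * (known_spi_seconds / candidate_spi_seconds)

# Hypothetical numbers: 34 FPS on a CPU that does SPI 1M in 10.5 s,
# versus a candidate CPU that does SPI 1M in 8.0 s.
print(predict_fps(34.0, 10.5, 8.0))  # ~44.6 FPS predicted
```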

I suspect the technique I presented would work with many older, simulation-based games with both high CPU and high GPU loads. Games like FSX and car racing sims like Grand Prix Legends (circa 1998, although that code base became iRacing; iRacing is likely well multi-threaded, so it likely wouldn't apply). I'm not making claims beyond TS12 at this point, and I want to make it clear these are suspicions at the moment.

I'm not making any claims about, nor is this thread about, AAA titles, first-person shooters, and most "mainstream" games you have experience with.

That said, I wonder about games like GTA and Fallout 4 where I've read about CPU being important due to the open world nature of those titles. Not making any claims here either, but am curious, TS12 is an open world game too.

I would still argue that benchmarking the actual game is the best way to go about determining its performance, but I understand no one has TS12. I had never even heard of it until this thread.

From a high level, I think we violently agree that benchmarking the actual application is best.

No one has TS12? I have TS12, I'm a someone. Again, popularity does not invalidate the technique. There are lots of niche games out there, and a big problem for enthusiasts of those titles is that they have zero representation in widely reported benchmarks of CPUs and GPUs. The only way to get a prediction for them is something like this, if it is available. Or, they can buy a system with a good return policy and just return it if it falls short - hardly ethical.

I had never even heard of it until this thread.
THIS is precisely why I busted your chops earlier in the thread about being so convinced of your position. This TS12 thing was an unknown to you and yet you were so sure how it was going to work out - even with some warning that this one was a bit different than the norm. When you have holes in your data, get the data, then draw conclusions. Be tepid with claims; frankly, something on the order of "based on my experience I think this isn't going to work, but I don't know anything about TS12, so let's find out" wouldn't have been bothersome at all.

I hope you remember the lesson -- it will serve you well. And I'm not innocent - I did something very similar to this in my youth in grad school. Had a relationship I was sure was true. Wrote an abstract, got a speaking slot in a conference, took data. The data didn't work. Took more data. The data didn't work. Came up with all this detailed theory about why the data was wrong. Two days before the conference I had jack squat. Then it hit me, the data was correct, my theory was bullshit. Wrote that conference presentation on the plane on the way to the conference about what I needed to have solved before I could do what was in the abstract. It was a big lesson. Professionally, that lesson has served me well.

This is a gross oversimplification of things. Being old x86 code doesn't generally mean that there is anything correlative between them. A game engine, even a CPU-limited one, doesn't calculate Pi. I think the only reason this works at all is because Intel's processors have largely been stagnant in terms of single-threaded performance.

Simplification, yes. Oversimplification is a matter of opinion. The data so far shows the model, while simple, is good enough.

Old x86 code simply means both code bases have access to the same instructions and neither has significant multi-threading. All of these CPUs are general purpose processors - they don't know if they are computing Pi or pulling objects out of a database to stuff into a TS12 scene. There is no "Pi" instruction in any of these CPUs.

I think the only reason this works at all is because Intel's processors have largely been stagnant in terms of single-threaded performance.

I really like this^^^^ sentence and I largely agree with it. But I don't like the sentence just because I agree with it; it has to do with my response above about claims/thoughts/gut feelings without data.

The stagnation does likely help, but again, without data, who knows. Perhaps if I can get that Prescott to run the TS12 Bench 1 we can get a clue, since it would spread the single-threaded performance range well away from today's stuff. Real data won't be available until progress is made on single-threaded performance - which may never happen.

Even so, there may be differences between a 3770K and a 10980XE in this game because it is a game. Yet, both of these CPUs will clock about the same, around 4.7GHz. An HEDT processor, however, is generally less than ideal for gaming. There are increases in latency given their designs. The first HEDT processor not to suffer in this way is really the third-generation Threadripper.

I think that is an oversimplification. All games are not the same, just as all programs are not the same.

As an aside, I side-stepped Sandy Bridge E because the memory bandwidth didn't help with general applications and I think because of low SPI scores - I don't remember if that is exactly true, but I know I skipped SBE and didn't regret it.

Run the TS12 Bench 1 on your HEDT - it had an impressive SPI score for being clocked at 4.7G.

Furthermore, using a third party application as a predictor of game performance will almost always require some sort of math and a lot of data to gather up front to be put to use. I'd still argue there is going to be a big margin of error for this as the relationships aren't likely to be 1:1.

Agree and disagree. Desperation is the mother of invention. One has to be careful to get reasonable margin of error, but it was possible here. I've found the accuracy has been about 2-3 FPS over several upgrades.

No doubt it is a lot of stinking work to do. For a general CPU/GPU reviewer, clearly not worth the time. For an enthusiast for the aforementioned game it was worth it and I'm a bit of a bench and performance jockey too - this is [H]ardOCP after all.

For example, as I said, I bet that if I ran TS12 on the 10980XE, it would run it like crap. Sure, at 4.7GHz it's no slouch, but it's slower than a 9900K at gaming. It's almost worth the $3 for me to find that out. I could be wrong, but it would be an interesting test. Of course, the 10980XE and HEDT processors for a game like this would still be an outlier at best.

Get the data, find out. $2.50 barely buys a cup of coffee these days.

You keep using that term outlier, and that is a big point of this whole topic, only "outlier" as you used it applies to the 9900K and likely 9700K as well. A program doesn't get the boost that "it should have" in these newer processors. Historically, this is uncommon as CPUs have gotten faster. For the most part, everybody enjoyed the benefits of the upgrade. You use the term outlier - I use the term YMMV.

Historically there is another great outlier out there as well, Quake. Recall how it ran very much better on Intel 486 than on Cyrix even though the Cyrix ran a bunch of stuff faster than Intel at lower cost. It was due to an architecture feature of the 486 FPU that John Carmack took advantage of.

You mentioned trying to get a baseline for Ryzen processors on TS12 so you could use Super Pi to predict their results. This may have a higher margin of error than Intel CPUs generally will, due to their configuration and architectural changes being much more meaningful than Intel's have been. For example, Zen and Zen+ CPUs may correlate well enough together, but Zen 2 is a different beast. While Zen 2 is a descendant of Zen and Zen+, a lot has changed between them. Cache design, the Infinity Fabric and so on.

With the edit above, I agree.

However, as I found out the hard way, Threadripper CPUs (1st and 2nd generation) are kind of bad at gaming. While their Super Pi scores will be similar to those of their mainstream counterparts, they suck at gaming due to their NUMA architecture and the amount of latency introduced when crossing CCX and die boundaries inside the CPU.

Genuinely curious as to why it was the "hard way".

This is something we can absolutely agree on. Taking this concept further, third party benchmarks such as 3D Mark or Heaven aren't actually games themselves. Nor are they accurate predictors of what you can expect in any game beyond the vaguest connection. Basically, if you get a really high 3D Mark score, chances are you can run any given game well. But as I said before, you can't say 11,000 3D Marks equals 120FPS in Destiny 2. The correlation between the two things just isn't there. Such benchmarks are often better as stress testers than they are anything else.

Pretty much agree here too. I've never found 3D Mark to be of much use even as a stress tester. I think Heaven has its uses for looking at differences in performance between an existing GPU and a GPU under upgrade consideration, such as a GTX 980 to a 1070 Ti, for example. An irritating reason this is needed is that many current GPU reviews don't include more than one generation back in their results -- this in particular drives me nuts.

Your comments regarding 3D Marks and Destiny 2 appear to be based on data - no issue with those at all.


A few things here. Yes, Super Pi is all that you said it is and generally meaningless as it relates to games. This has been my point from the beginning. The fact that you can derive anything from it in the realm of any game, CPU limited or not is impressive.

Thank you, and I'll accept the compliment.

I still think it's a poor predictor even for TS12 due to my examples above.

Again, the data disagrees with you.

The other thing I'd like to mention is the original Crysis. It's actually somewhat similar. Although lightly multi-threaded, it's no better on a 9900K than it is on a 2600K or even older processors. When Crysis came out, almost all CPUs were single core. Dual core CPUs were relatively new then. The Crytek engines have since been improved dramatically, but Crysis 1 can still bring a system to its knees today if you crank up the settings and run it at 4K.

Yeah, but can your 9900K run Crysis? :) God, that question got worn out. I remember Crysis, who could forget how it emasculated machines. Why, it was another, wait for it, outlier. Albeit, it was a popular one.

You mentioned bringing GPUs into the equation and complicating things. While it's true that the GPU is more important than the CPU as it relates to gaming, the CPU is still critical.

In things like TS12, FSX, and Grand Prix Legends, both CPU and GPU were equally important if you wanted to run the game at sufficient FPS and have it look good. CPU/GPU balance is highly dependent upon the game and even on user preferences if a user doesn't care about resolution or AA, for example.

However, it's when you turn down settings and take the GPU out of the equation where the difference shows. On the opposite end of the spectrum, at 4K, your CPU is more important than you'd think, as low minimums are something you'll both see and feel quite often.

And here is where we have an outlier nested inside of an outlier. I understand the concept of minimums. Here is the kicker: TS12 has to be run with VSYNC to have smooth motion and physics. So that's either the full screen refresh rate or 1/2 the refresh rate. If this stupid game ever drops below that VSYNC rate, the motion and physics turn into a jitterfest -- much more so than you'd expect from the dropped frame or two. If you can hold those minimums, it can be butter smooth, which for me is a requirement as the jerkiness destroys the suspension of disbelief. It's probably why the requirements for this bloody thing are so high.
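
To spell out why dropping under the refresh rate hurts so much, here is a toy model of full (double-buffered) V-Sync. It's illustrative only, not measured TS12 behavior: a frame that misses a 60Hz vblank waits for the next one, so the displayed rate snaps to 60, 30, 20 and so on.

```python
import math

REFRESH_HZ = 60.0
VBLANK_MS = 1000.0 / REFRESH_HZ  # ~16.67 ms per refresh

def displayed_fps(render_ms):
    """Effective frame rate once a frame's time is rounded up to the next vblank."""
    displayed_ms = math.ceil(render_ms / VBLANK_MS) * VBLANK_MS
    return 1000.0 / displayed_ms

print(displayed_fps(16.0))  # 60.0 -- just makes the vblank
print(displayed_fps(17.5))  # 30.0 -- barely misses it and gets cut in half
print(displayed_fps(34.0))  # 20.0 -- misses two vblanks
```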

In the successor to TS12, TANE, which can run the content from TS12, the CPU loading problem is solved. The engine got a complete re-write, is multi-threaded and my 2600K doesn't break a sweat on it. However, if the video card stumbles, the same jerkfest returns when it can't keep up with VSYNC. From my estimates, there are no single video cards that can fix this problem with acceptable image quality. Hence I'm looking for a second 1070TI following that alluring promise of SLI "doubling performance". A 100% boost in video from what I have now would do it. And no, I'm not using SPI to judge the GPU side of things. And I know SLI is a crap shoot, even with well known titles.

You can bring Heaven into the equation all you want, but I'm not sure it will help you in anyway where it comes to TS12. However, if you think it will, I have LOTS of Heaven results from a lot of processors.

I don't know either. I'd prefer to settle the CPU side first and then look at the GPU. For example, if I take my TS12 settings for a reasonable CPU load and crank the AA settings up enough to max out the GPU, will a Heaven comparison between that existing GPU and an upgraded GPU track the improvement of a GPU-limited TS12?

The estimate doesn't have to be exact, these are ball park estimates to judge upgrades. This generally means the estimate has to show the upgrade is really, really worth it, or the upgrade has to be cheap.

-Mike
 
Oh man.. stop with the condescending bs, please. Show some respect - having a difference of opinion does not mean you have to be a prick about it.

Re GPU-limited TS12 - It is VERY clear that this is not in any way GPU limited with the current GPUs (especially at 1080p). Period. If the bloody game used graphics memory wisely, it'd probably stop decompressing stuff with the CPU, use proper (hardware supported) texture compression, and offload it to graphics memory, but alas, back when this game was in its prime 1GB (or less) cards were the norm, 3GB high end.

In contrast, the 1080 Ti has 11GB, and most cards these days have 8GB worth of memory with outrageous amounts of bandwidth.
 
Popularity of the game is irrelevant. There is a user base, and it is not zero. It may not be your cup of tea, nor most people's, but that's not pertinent to the discussion.

Actually, the popularity of the game is relevant. Its lack of popularity is the only reason why you would need or even want a convoluted means of determining performance rather than simply benchmarking the actual game.

That's your strawman argument not mine. Your statement is about general games, my statement is about a specific game - and even that game is under a very specific condition as the game must be CPU limited for the ratio to work.

Nope. Read it again. I'm saying that there are potential differences in THIS game with different architectures. Namely, HEDT processors. You missed two things: potential, and this game. Yes, the basis for my prediction is based on gaming on these processors in general. Games are negatively impacted by some of these architectural and platform differences.

I suspect the technique I presented would work with many older, simulation-based games with both high CPU and high GPU loads. Games like FSX and car racing sims like Grand Prix Legends (circa 1998, although that code base became iRacing; iRacing is likely well multi-threaded, so it likely wouldn't apply). I'm not making claims beyond TS12 at this point, and I want to make it clear these are suspicions at the moment.

As I said, Crysis is one game that follows this behavior. It largely performs the same on any Intel CPU made since the Core 2 Duo. Clock speed being the biggest impact to its performance. It's multi-threaded, but only in the most rudimentary and half-assed way.

I'm not making any claims about, nor is this thread about, AAA titles, first-person shooters, and most "mainstream" games you have experience with.

I'm not saying you did. I'm simply saying that this is the reason why I would ordinarily discount this idea that something like Super Pi could be a predictor of a game. Any game. Generally, games work pretty much the same. The same things impact them even if you can't use one game with a given engine to predict how a different game will perform using the same engine. That's a separate topic for another time, but the point is, I theorized that TS12 would perform better on a 9900K than a 2600K and that wasn't the case.

That said, I wonder about games like GTA and Fallout 4 where I've read about CPU being important due to the open world nature of those titles. Not making any claims here either, but am curious, TS12 is an open world game too.

I don't know about Fallout 4. However, GTA IV benefits from higher frequency CPUs more than anything, but it is multi-threaded well enough to leverage something like a 9900K fairly well. It does not, however, benefit from increased core counts beyond that from what I can tell.

From a high level, I think we violently agree that benchmarking the actual application is best.

And this is my point. Why not go direct to the source instead of mucking around with Super Pi? That way, you could be sure of how something would perform without having to gather additional data points and then run the numbers.

No one has TS12? I have TS12, I'm a someone. Again, popularity does not invalidate the technique. There are lots of niche games out there, and a big problem for enthusiasts of those titles is that they have zero representation in widely reported benchmarks of CPUs and GPUs. The only way to get a prediction for them is something like this, if it is available. Or, they can buy a system with a good return policy and just return it if it falls short - hardly ethical.

Relax. I mean that the game isn't that popular. The lack of popularity is the only reason I can see for the technique.

THIS is precisely why I busted your chops earlier in the thread about being so convinced of your position. Snip.................

Again, I made a prediction about the results based on a ton of experience testing CPUs and games. I literally do it for a living. If I'm wrong, I'm wrong. I'm not exactly bothered by it. That's why I had no trouble admitting it. It's actually interesting to be proved wrong in this case.

I hope you remember the lesson -- it will serve you well. And I'm not innocent - I did something very similar to this in my youth in grad school. Had a relationship I was sure was true. Wrote an abstract, got a speaking slot in a conference, took data. The data didn't work. Took more data. The data didn't work. Came up with all this detailed theory about why the data was wrong. Two days before the conference I had jack squat. Then it hit me, the data was correct, my theory was bullshit. Wrote that conference presentation on the plane on the way to the conference about what I needed to have solved before I could do what was in the abstract. It was a big lesson. Professionally, that lesson has served me well.

Again, there is nothing wrong with making a prediction based on experience. The problem is when someone isn't willing to accept the data, nor willing to put their theory to the test. While I hadn't been willing to spend the money on the game, I was fine with someone proving me wrong. I didn't expect it, but it's not a problem.

Simplification, yes. Oversimplification is a matter of opinion. The data so far shows the model, while simple, is good enough.

I never said it wasn't. I simply said that I don't think the simple model you have will work all the time. I haven't tested that yet, but I plan to. I will take it a step further, because I don't think that just eyeballing the FPS is good enough.

Old x86 code simply means both code bases have access to the same instructions and neither has significant multi-threading. All of these CPUs are general purpose processors - they don't know if they are computing Pi or pulling objects out of a database to stuff into a TS12 scene. There is no "Pi" instruction in any of these CPUs.

Fair enough.

I really like this^^^^ sentence and I largely agree with it. But I don't like the sentence just because I agree with it; it has to do with my response above about claims/thoughts/gut feelings without data.

The stagnation does likely help, but again, without data, who knows. Perhaps if I can get that Prescott to run the TS12 Bench 1 we can get a clue, since it would spread the single-threaded performance range well away from today's stuff. Real data won't be available until progress is made on single-threaded performance - which may never happen.

Single-threaded performance has improved. But it depends on clocks, instructions, and a whole bunch of other things. This is simply a case where neither Super Pi, nor TS12 seem to benefit from the improvements. As you said, it's legacy code. Bulldozer at 5.0GHz is a dog. It's far worse than your 2600K.

I think that is an oversimplification. All games are not the same, just as all programs are not the same.

I never suggested they were. However, games do generally benefit from the same things, even if it's to varying degrees. Games benefit from more cache, higher clock speeds, etc. Even games with the same game engine may not behave the same. That is, you can't assume you'll get the same frame rates in Mass Effect Andromeda as you would in Battlefield 5, despite using the same engine. They are built and implemented differently because their gameplay is inherently different.

As an aside, I side-stepped Sandy Bridge E because the memory bandwidth didn't help with general applications and I think because of low SPI scores - I don't remember if that is exactly true, but I know I skipped SBE and didn't regret it.

Memory bandwidth by itself doesn't generally benefit most desktop applications.

Run the TS12 Bench 1 on your HEDT - it had an impressive SPI score for being clocked at 4.7G.

That's actually the plan.

Agree and disagree. Desperation is the mother of invention. One has to be careful to get reasonable margin of error, but it was possible here. I've found the accuracy has been about 2-3 FPS over several upgrades.

No doubt it is a lot of stinking work to do. For a general CPU/GPU reviewer, clearly not worth the time. For an enthusiast for the aforementioned game it was worth it and I'm a bit of a bench and performance jockey too - this is [H]ardOCP after all.

Yeah, I wouldn't use this in my reviews.


Get the data, find out. $2.50 barely buys a cup of coffee these days.

You keep using that term outlier, and that is a big point of this whole topic, only "outlier" as you used it applies to the 9900K and likely 9700K as well. A program doesn't get the boost that "it should have" in these newer processors. Historically, this is uncommon as CPUs have gotten faster. For the most part, everybody enjoyed the benefits of the upgrade. You use the term outlier - I use the term YMMV.

Yes, it is an outlier. It's not normal for a game to benefit so little from the various architectural enhancements made over the last ten years.

Historically there is another great outlier out there as well, Quake. Recall how it ran very much better on Intel 486 than on Cyrix even though the Cyrix ran a bunch of stuff faster than Intel at lower cost. It was due to an architecture feature of the 486 FPU that John Carmack took advantage of.

That was actually during the Pentium and Cyrix 6x86 era. But you're right. The Cyrix CPUs had lower clocks, ran hot as hell and matched or beat Intel at a lot of things. However, the FPU on the Pentium was vastly superior and since Quake's engine made use of it, the difference between the two was huge. I had a 6x86 PR200+ for a while and it ran Quake I like a Pentium 90.


Genuinely curious as to why it was the "hard way".

It's a long story. Essentially, I believed the conventional wisdom that your GPU mattered far more than your CPU at higher resolutions. I was gaming at 4K exclusively at the time. I had been running a Core i7 5960X and I had a Threadripper 2920X on hand. I figured it would be a side grade for gaming and better in everything else. At the same time, I went from dual GTX 1080 Ti's in SLI to a single overclocked RTX 2080 Ti. After the upgrade, Destiny 2 ran badly in places. I suffered FPS drops I could see down into the mid-40's when before, it never dropped below 60FPS with V-Sync on. I figured it was the change in video cards primarily. Destiny 2 actually likes SLI. It's a bit of an outlier for that these days.

I was working on a friend's machine and fired up Destiny 2 on it. He had a 3770K @ stock and dual 980 Ti's in SLI. He had the same monitor (Samsung KS8500 49" 4K TV) and his game ran faster than mine without dropping below 60FPS. So he had an archaic processor and video cards that were two generations back. This didn't track right. I talked to other people with worse systems who had better performance than I got.

Long story short, when I started doing the testing for my 3900X review against a Core i9 9900K, I found out that Destiny 2 didn't like AMD processors very much and that Threadripper was especially bad. Despite the fact that the average frame rates on all the tested processors were similar, the experience they provided was not. I got 26FPS @ 4K with a 4.2GHz all-core overclock and 36FPS with PBO on, which yielded boost clocks on a single core up to 4.3GHz. Meanwhile the 9900K scored a result of 54FPS stock and 56FPS @ 5.0GHz. When you look at the frame times, those drops I saw to 45FPS were just what I saw on the Threadripper. It actually dropped below that a lot of the time. Meanwhile the Intel 9900K showed a minimum frame rate of 56FPS, but it did it so fast and so infrequently that I never saw it.

Basically, I learned that your CPU matters more than you think even when GPU bound. Yes, the GPU is more important, but the CPU is still a big deal. I've tested a bunch of other games, and it's the same story much of the time. You need to look at the frame times and frame rates to see what's really going on. Even if your Super Pi model can predict frame rates, it can't tell the whole story. I would expect a modern processor with more modern I/O, faster RAM, etc. to run TS12 better, even if the average frame rate is the same.
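
To illustrate with made-up frame times (these are not from any of the runs above) why an average can hide the problem:

```python
def fps_stats(frame_times_ms):
    """Average FPS and a rough '1% low' (average FPS over the slowest 1% of frames)."""
    times = sorted(frame_times_ms)
    avg_fps = 1000.0 / (sum(times) / len(times))
    worst = times[-max(1, len(times) // 100):]
    low_1pct_fps = 1000.0 / (sum(worst) / len(worst))
    return round(avg_fps, 1), round(low_1pct_fps, 1)

smooth = [16.7] * 1000                # a steady ~60 FPS
spiky = [15.0] * 990 + [60.0] * 10    # faster on average, but with visible hitches

print(fps_stats(smooth))  # (59.9, 59.9)
print(fps_stats(spiky))   # (~64.7, ~16.7) -- better average, far worse experience
```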

Pretty much agree here too. I've never found 3D Mark to be of much use even as a stress tester. I think Heaven has its uses for looking at differences in performance between an existing GPU and a GPU under upgrade consideration, such as a GTX 980 to a 1070 Ti, for example. An irritating reason this is needed is that many current GPU reviews don't include more than one generation back in their results -- this in particular drives me nuts.

As a reviewer, I can tell you that it's a pragmatic decision not to go too far back. The problem is that we get very little time in these articles to test with. From the time we receive a product to the time it launches could be anywhere from 3 to 10 days. That's not much time to write a massive article, gather data, troubleshoot any product issues, and so on. We wouldn't really want to test with anything older than a GTX 1080 Ti. It's just not really all that relevant. You aren't cross shopping older cards than that with newer cards. I know people want to know how their old hardware fares in modern games, but that just isn't feasible. Testing procedures and standards change. Sometimes we no longer have the old hardware, and retesting it all becomes something we just don't have time for.

Your comments regarding 3D Marks and Destiny 2 appear to be based on data - no issue with those at all.

They are, but I've said the same thing about 3D Mark for the last 15 years. If HardOCP's main page were up, you could go back and see that. Although, I'm sure some old ass forum posts where I said that are still around. I said it for the same reason I say it today. 3D Mark doesn't use a game engine. It hasn't for over a decade and a half now. Even when it did, it was a specific engine and didn't translate to anything else. After they dropped the game engine, variables that wouldn't impact your frame rates impacted 3D Mark scores considerably. One of the most telling cases was between the 8800GTX and the 2900XT. The latter got much higher scores in 3D Mark, but it was worse than the 8800GTX at everything.

Yeah, but can your 9900K run Crysis? :) God, that question got worn out. I remember Crysis, who could forget how it emasculated machines. Why, it was another, wait for it, outlier. Albeit, it was a popular one.

Yes, but my point is that it doesn't really do it better than an older processor like a 2500K clocked the same. That's because of how it's coded.


In things like TS12, FSX, and Grand Prix Legends, both CPU and GPU were equally important if you wanted to run the game at sufficient FPS and have it look good. CPU/GPU balance is highly dependent upon the game and even on user preferences if a user doesn't care about resolution or AA, for example.

That's my point talking about CPU's where people believe you are more GPU limited. While you are, a good CPU is still important as it directly impacts the gaming experience. Frame rates, frametimes etc. are all impacted by that.

And here is where we have an outlier nested inside of an outlier. I understand the concept of minimums. Here is the kicker: TS12 has to be run with VSYNC to have smooth motion and physics. So that's either the full screen refresh rate or 1/2 the refresh rate. If this stupid game ever drops below that VSYNC rate, the motion and physics turn into a jitterfest -- much more so than you'd expect from the dropped frame or two. If you can hold those minimums, it can be butter smooth, which for me is a requirement as the jerkiness destroys the suspension of disbelief. It's probably why the requirements for this bloody thing are so high.

If you're running V-Sync, your FPS will be at whatever your refresh rate is. Obviously, if it drops below 60FPS, let's say, then it will get cut in half. It sounds like the physics are tied to frame rate. If so, I'm not sure your data means anything because V-Sync screws with the results. How would G-Sync impact this? I have a 120Hz monitor. I have a 4K 60Hz FreeSync monitor as well. I can have 60FPS without having to get my FPS cut in half if I drop under 60FPS.

In the successor to TS12, TANE, which can run the content from TS12, the CPU loading problem is solved. The engine got a complete re-write, is multi-threaded and my 2600K doesn't break a sweat on it. However, if the video card stumbles, the same jerkfest returns when it can't keep up with VSYNC. From my estimates, there are no single video cards that can fix this problem with acceptable image quality. Hence I'm looking for a second 1070TI following that alluring promise of SLI "doubling performance". A 100% boost in video from what I have now would do it. And no, I'm not using SPI to judge the GPU side of things. And I know SLI is a crap shoot, even with well known titles.

I saw there were considerably newer versions of TS such as TS20. Why not use one of those?

I don't know either. I'd prefer to settle the CPU side first and then look at the GPU. For example, if I take my TS12 settings for a reasonable CPU load and crank the AA settings up enough to max out the GPU, will a Heaven comparison between that existing GPU and an upgraded GPU track the improvement of a GPU-limited TS12?

I guess I don't get this approach. If a game runs like shit (and 34FPS is shit), then I throw money at it. I get the fastest video card and CPU I can afford, so long as it provides the performance I'm looking for. I didn't buy a 9980XE at $2,000 because it wasn't going to do anything for me in games. Of course, with the games I play and the res I play at, I need all the GPU power I can get. So I just get the best one at the time. I used to buy GPUs in pairs when SLI was worth a damn.

The estimate doesn't have to be exact, these are ball park estimates to judge upgrades. This generally means the estimate has to show the upgrade is really, really worth it, or the upgrade has to be cheap.

That's perfectly reasonable.
 
Oh wow.

Okay ... let's be honest here. Are we posting here to pass time/argue or figure something out?

Personally I think going into the future, if you're still playing TS12 you're going to need to bench the actual game to know if newer and newer CPUs will beat your Sandy Bridge.

But honestly, I'm messing around in it now that I own the game, and maxing those settings doesn't seem to add a whole lot to the game... so would a conclusion even matter?
 
Eh, I think I'm just trying to be helpful, and somewhat scientific, about the TS12 results. It's interesting to mess around with from a personal perspective... but really it's just an interesting outlier to explore for me. Unknown to most here, I used to review games.
 
Oh man.. stop with the condescending bs, please. Show some respect - having a difference of opinion does not mean you have to be a prick about it.

Re GPU-limited TS12 - It is VERY clear that this is not in any way GPU limited with the current GPUs (especially at 1080p). Period. If the bloody game used graphics memory wisely, it'd probably stop decompressing stuff with the CPU, use proper (hardware supported) texture compression, and offload it to graphics memory, but alas, back when this game was in its prime 1GB (or less) cards were the norm, 3GB high end.

In contrast, the 1080 Ti has 11GB, and most cards these days have 8GB worth of memory with outrageous amounts of bandwidth.

Having a difference of opinion is not being a prick about it. What is that old line? "They welcome all points of view until they discover there actually are other points of view".

What you call condescension is what I call the scientific method. Scientific method is not about feelz and can appear cold to some not used to it - it is not for the faint of heart. Dan doesn't seem to have a problem with it. Perhaps it would be better for Dan and I to take our debate to PMs as we are the only two significantly talking about that stuff anyway.

I have news for you, both TS12 and TANE can bring a 2080TI to its knees (sub 20FPS) at high settings with other content like Mojave Division. We haven't even dipped our toe into the water for that. On that same point, I'm almost 100% certain that TS20 will do the same thing. It took those guys *20* years to update the game engine the first time. TS20 is more about DLC and renting SW.

What the thread topic has moved to is the actual motivation for the original question. I'm some dude. I have a game I'm passionate about that just doesn't run well easily. It isn't popular enough to attract the normal stable of reviewers. How do I take what is available to reviewers and apply it to this game, and not have it be bullshit, before I make my investment? In addition to the game selection, how do I match my video card to that review data when they used a different system? Given some of the data shown here, is a review of a video card on an Intel system with a selection of games comparable to that same video card on a Ryzen platform? The problem is actually getting worse.

With respect to older games, there are a lot of popular enough ones out there and the upgrade paths today are clear as mud. In a way it is almost a blessing if one wants to play something like Overwatch based on that "blown away" thread over in AMD processors because the upgrade choice is easy. I get a huge boost in Overwatch and everything else is a wash. https://hardforum.com/threads/upgraded-from-2500k-to-3700x-mind-blown.1988627/

While my forum ID describes me as a "newbie", I've been lurking and seldom posting here since before 2000. It was my understanding of [H]ardOCP that the governing philosophy was maximum verified performance at reasonable cost. Maximum playable settings. My TS12 benching methods are derived precisely to achieve maximum playable settings and to guide HW decisions. Given that [H]ardOCP is gone, if this philosophy goes away from [H]ardForum, its value, at least in my opinion, is greatly diminished. Kyle didn't get the reputation and respect he has by being "nice" all the time.

I think it best to leave the thread to reported results to those interested, some examinations of how the ratios hold and I'll take the bigger discussion I'm having with Dan to PMs.

Cheers
-Mike
 
I welcome all points of view, I do not necessarily agree with them. In my case, silence does not equal agreement.
 
Having a difference of opinion is not being a prick about it. What is that old line? "They welcome all points of view until they discover there actually are other points of view".

What you call condescension is what I call the scientific method. Scientific method is not about feelz and can appear cold to some not used to it - it is not for the faint of heart. Dan doesn't seem to have a problem with it. Perhaps it would be better for Dan and I to take our debate to PMs as we are the only two significantly talking about that stuff anyway.

Oh no, I do have a problem with the condescending tone. Let me be clear on that front. I didn't want to drag the thread off topic and as a result, I've simply tempered my responses to it.

I have news for you, both TS12 and TANE can bring a 2080TI to its knees (sub 20FPS) at high settings with other content like Mojave Division. We haven't even dipped our toe into the water for that. On that same point, I'm almost 100% certain that TS20 will do the same thing. It took those guys *20* years to update the game engine the first time. TS20 is more about DLC and renting SW.

I was simply asking because it would seem probable that a newer version of the software would be able to leverage newer hardware.

What the thread topic has moved to is the actual motivation for the original question. I'm some dude. I have a game I'm passionate about that just doesn't run well easily. It isn't popular enough to attract the normal stable of reviewers. How do I take what is available to reviewers and apply it to this game, and not have it be bullshit, before I make my investment? In addition to the game selection, how do I match my video card to that review data when they used a different system? Given some of the data shown here, is a review of a video card on an Intel system with a selection of games comparable to that same video card on a Ryzen platform? The problem is actually getting worse.

I only questioned the motivation behind why you used the Super Pi method as it's not the best way to determine performance in the application in question. The rest, I understood as it was implied.

With respect to older games, there are a lot of popular enough ones out there and the upgrade paths today are clear as mud. In a way it is almost a blessing if one wants to play something like Overwatch based on that "blown away" thread over in AMD processors because the upgrade choice is easy. I get a huge boost in Overwatch and everything else is a wash. https://hardforum.com/threads/upgraded-from-2500k-to-3700x-mind-blown.1988627/

Well, there is a way to determine an upgrade path. The older reviews generally still exist on any sites still in operation. So you can track the improvements in graphics cards as far back as you like, as comparing current gen to last gen is something that always gets done. There is also a lot of crossover, as both AMD and NVIDIA like to rebrand older cards and sell them as newer ones. So, those get reviewed against their older versions and newer cards. The software gets changed out, and that's the only problem. However, newer GPUs will always run games better than older ones, with rare exceptions. For example, a 9800GX2 was not an upgrade from 8800GTX SLI. But, those comparisons were there for people to see.

While my forum ID describes me as a "newbie", I've been lurking and seldom posting here since before 2000. It was my understanding of [H]ardOCP that the governing philosophy was maximum verified performance at reasonable cost. Maximum playable settings. My TS12 benching methods are derived precisely to achieve maximum playable settings and to guide HW decisions. Given that [H]ardOCP is gone, if this philosophy goes away from [H]ardForum, its value, at least in my opinion, is greatly diminished. Kyle didn't get the reputation and respect he has by being "nice" all the time.

Not exactly. While overclocking was initially about getting more performance for less money, and that was the reason for the site's origins, the articles on the site did not always cater to that mindset. Kyle, Paul, Brent and myself all gave awards to plenty of things that weren't cost effective, simply because they were the best on the market at the time or whatever. We continue the same line of thinking and perspective at TheFPSReview.com. It's all the same people, minus Kyle.

For example, even though the 10980XE is expensive, it still has value in certain cases. It's a good part, but I didn't grant it an award because there are too many qualifiers on it. The 3900X is expensive, but it did receive an award because it was innovative and an excellent performer. While at the top of the stack (at the time), it's still a good bang-for-your-buck solution as it gives you HEDT levels of performance without the drawbacks of a 2950X and X399. In other words, we have always looked at hardware from multiple perspectives and applied analysis to their benefits. Price / Performance is important, but it's not and never has been the sole focus of the site. That goes for HardOCP as well as TheFPSReview.

I think it best to leave the thread to reported results to those interested, some examinations of how the ratios hold and I'll take the bigger discussion I'm having with Dan to PMs.

I'm not sure what else there is to discuss beyond results. I've made all the statements I need to concerning this subject. I don't have a Threadripper of any kind, so I can't test my theory on those. I will get around to it at some point on the 10980XE and the 9900K. I will use NVIDIA's Frameview to capture performance data as it will provide far more detail about what's going on. I would invite you to do the same.
 
But honestly, Im messing around in it now that I own the game, maxing those settings don't seems to add a whole lot to game... so would a conclusion even matter?

The only reason the Trainz franchise still exists is user-created content. Literally over the decades, content creators have upped the game significantly, and the top-end content drives up the system requirements. Rail-sim.com.uk's Settle and Carlisle goes back a decade and that content is very prototypical, very complete, and very demanding on the sim. JointedRail came on the scene and raised the content bar so much that their Mojave Subdivision was included in TS12. Approach Medium's stuff is incredible.


If one goes back through the releases, and there have been a lot of them, Trainz ... 04, 06 ... 09, 10, TS12, it has been the content that has driven improvements and the need for increasingly higher-spec'ed machines, not the game engine. Somewhere along the way the game got some multi-threading where they spun off the database onto its own thread. It didn't really do much for performance. TANE was a game changer and its backwards compatibility, while challenged, is impressive as the new engine is orders of magnitude faster - if it ran DX9 I'd be very happy with it, as I think the DX11 renderer has quality issues.

With nicer content like the Mojave Subdivision, the settings make a very big difference. They also make a big difference when dynamics are introduced. That first benchmark is literally a simple quick-and-dirty test to get results quickly and easily.

-Mike
 
Here is the data off the 5.0GHz Core i9 9900K. Full system specs are in my signature. I'll try it on the 10980XE shortly.

(attached screenshot: TS12 benchmark results)


I set this up precisely as instructed. I can't say too much about the data as I have nothing to compare it to.
 
Oh no, I do have a problem with the condescending tone. Let me be clear on that front. I didn't want to drag the thread off topic and as a result, I've simply tempered my responses to it.

I was simply asking because it would seem probable that a newer version of the software would be able to leverage newer hardware.

I only questioned the motivation behind why you used the Super Pi method as it's not the best way to determine performance in the application in question. The rest, I understood as it was implied.

Tone is what it is and is not meant to be personal. It is argument based on data, and tempering what one feels is going on. The data is the evidence, and it indicates Super Pi works here for one of the corner cases in the game.

This is NV3Games/Auran; newer versions are almost always all sizzle and no steak. TANE provided a good bump but it's "saddled" with DX11. TS20 isn't much more than TANE. In a perfect world, I'd run TS12 today at 50FPS - it would look better than TANE, but that would take the mythical 10GHz 2600K and a mythical memory boost as well. Bottom line, a newer version won't help.

I've explained the Super Pi motivation before and I'll give it one more shot, and then I'll just agree to disagree. I noticed that when the video card was not pegged at 100% and the frame rate dropped below 30 or 60 FPS, changes in the frame rate would roughly track changes to CPU clock rate. Somewhere along the line I noticed that changes to SPI results changed in a very similar way to changes in CPU clock rate - this was back on a P4 Prescott that would run 3.8GHz. When I compared TS09 changes to SPI result changes, they tracked. When I looked at C2D upgrades and followed through on those, the results tracked. The results have tracked all the way to the 9900K now. That's all there is to it. That AMD result is interesting because it's the first one that hasn't quite tracked in 10 years.

This is only one small, but very necessary, part of getting the best performance out of this application. It is used to judge whether CPU upgrades may work and what reasonable settings, such as draw distances, may work. Once the settings are dialed in here, one has to move on to upping the graphics AA settings, quality, and resolution. And there is a bit of back and forth, i.e. longer draw distances impact both CPU and GPU, so while the CPU is okay at, say, 10000m, the graphics card chokes, so you knock that setting down to 8000m. Rinse and repeat.
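
The back and forth looks roughly like this as a sketch. The setting names, step sizes, and the measure_min_fps() hook are placeholders, not any real TS12 interface, just the shape of the loop:

```python
def tune(measure_min_fps, target_fps=30):
    """Walk settings down until the measured minimum frame rate holds the target."""
    draw_distance = 10000  # metres, start optimistic
    aa_level = 8
    while draw_distance >= 4000:
        if measure_min_fps(draw_distance, aa_level) >= target_fps:
            return draw_distance, aa_level   # holds the VSYNC target, keep it
        if aa_level > 2:
            aa_level -= 2                    # ease the GPU side first
        else:
            draw_distance -= 1000            # then shorten the draw distance
            aa_level = 8                     # and try the eye candy again
    return draw_distance, aa_level           # best we could do
```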

Thus, the Super Pi part is only one tiny part of the equation. This isn't an easy game to tweak. I haven't run into any others that are this bad, but FSX comes close. It is completely unlike a first person shooter.

I'm not sure what else there is to discuss beyond results. I've made all the statements I need to concerning this subject. I don't have a Threadripper of any kind, so I can't test my theory on those. I will get around to it at some point on the 10980XE and the 9900K. I will use NVIDIA's Frameview to capture performance data as it will provide far more detail about what's going on. I would invite you to do the same.

I've never used Frameview; it looks interesting and I'll try it when I get some time. I hope it's less clumsy than using FRAPS and has better than 1 FPS resolution. It could be very useful when I try to add SLI to this mess.

-Mike
 
Here is the data off the 5.0GHz Core i9 9900K. Full system specs are in my signature. I'll try it on the 10980XE shortly.

(attached screenshot: TS12 benchmark results)

I set this up precisely as instructed. I can't say too much about the data as I have nothing to compare it to.

I'm not sure how to interpret that, but I think what you want is the display average, only you don't want to include the initial high frame rates right after the load. Perhaps if you just let it run for a bit.

The frame rates I've used either come from FRAPS or the OSD of MSI Afterburner. I believe there is a way for TS12 to display this as well - I'll have to dig for it. There is also a graphics profiler built into TS12; I'll have to find that as well.

-Mike
 
I'm not sure how to interpret that, but I think what you want is the display average, only you don't want to include the initial high frame rates right after the load. Perhaps if you just let it run for a bit.

The frame rates I've used either come from FRAPS or the OSD of MSI Afterburner. I believe there is a way for TS12 to display this as well - I'll have to dig for it. There is also a graphics profiler built into TS12; I'll have to find that as well.

-Mike

The average frame rates were actually run over the course of five minutes after the initial load. It took me forever just to figure out how to get the damn train moving so that I could record the data. I didn't want to do it while the train was still.

As I said, I suspect that V-Sync may be the real cause of the low performance. Since the game doesn't seem to let you disable it, V-Sync is going to limit your FPS to 30FPS every time you drop below 60FPS. If that's the case, then this is largely meaningless as the game is purposely limiting performance because it's keeping the FPS locked between 60 and 30 for its physics. I'm at 120Hz, but given that the game won't run that fast with these settings, I don't think that matters. I think it's limiting me to 60FPS no matter what. When this game came out, 120Hz displays were certainly not common.

EDIT: Using your settings, it may be limiting me to 60Hz. It's set to "Auto", so that may be what the game's doing. I do not know.

If that's the case, then this is less than even an academic exercise because we can't actually infer that performance is truly CPU limited or that newer CPU's don't benefit this game. Not if V-Sync is kicking us back to 30FPS just for dropping to 59FPS. Basically, it will always run this badly.

As a side note, I've never been able to get FRAPS to run on any of my machines running Windows 10 build 1903 or newer. It's long since stopped being supported.

EDIT:

Looking at the options, I think I need to set for 100Hz, and fix the V-Sync to "full". I think limiting to the automatic settings may be problematic.

Here is what I get when I set the game to 100Hz and "Full" for the V-Sync:

(attached screenshot: TS12 results at 100Hz with V-Sync set to Full)


The rest of the values are unchanged.
 
The average frame rates were actually run over the course of five minutes after the initial load. It took me forever just to figure out how to get the damn train moving so that I could record the data. I didn't want to do it while the train was still.

As I said, I suspect that V-Sync may be the real cause of the low performance. Since the game doesn't seem to let you disable it, V-Sync is going to limit your FPS to 30FPS every time you drop below 60FPS. If that's the case, then this is largely meaningless as the game is purposely limiting performance because it's keeping the FPS locked between 60 and 30 for its physics. I'm at 120Hz, but given that the game won't run that fast with these settings, I don't think that matters. I think it's limiting me to 60FPS no matter what. When this game came out, 120Hz displays were certainly not common.

EDIT: Using your settings, it may be limiting me to 60Hz. It's set to "Auto", so that may be what the game's doing. I do not know.

If that's the case, then this is less than even an academic exercise because we can't actually infer that performance is truly CPU limited or that newer CPU's don't benefit this game. Not if V-Sync is kicking us back to 30FPS just for dropping to 59FPS. Basically, it will always run this badly.

As a side note, I've never been able to get FRAPS to run on any of my machines running Windows 10 build 1903 or newer. It's long since stopped being supported.

EDIT:

Looking at the options, I think I need to set for 100Hz, and fix the V-Sync to "full". I think limiting to the automatic settings may be problematic.

Here is what I get when I set the game to 100Hz and "Full" for the V-Sync:

(attached screenshot: TS12 results at 100Hz with V-Sync set to Full)

The rest of the values are unchanged.

FWIW, I did my testing while the train was stationary.
 
The average frame rates were actually run over the course of five minutes after the initial load. It took me forever just to figure out how to get the damn train moving so that I could record the data. I didn't want to do it while the train was still.

Ok, that may explain it. Don't move the train. Just let the session open and have it just sit there. Here are the instructions, note #14: https://hardforum.com/threads/riddle-me-this-better-ipc.1989445/page-2#post-1044416618

I've never actually used the Steam version of TS12 before, and there is a new option I've never used that may be interesting. In the game, click on the Main Menu and under General Settings tick the box next to "Show performance characteristics in driver" under "Advanced settings for content developers".

Also, if you place the attached trainzoptions.txt file in the root directory where trainz.exe is and in the userdata sub-directory you will get a crude profiler display. Put it in both places because somewhere along the way Auran moved it.
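
If it helps, something like this drops the file into both spots in one go. The install path here is just an assumption; point it at wherever trainz.exe actually lives:

```python
import shutil
from pathlib import Path

# Assumed install location -- adjust to your own.
install_dir = Path(r"C:\Program Files (x86)\Steam\steamapps\common\Trainz Simulator 12")
options_file = Path("trainzoptions.txt")  # the attached file, saved locally

for target in (install_dir, install_dir / "userdata"):
    shutil.copy(options_file, target / "trainzoptions.txt")
    print(f"copied to {target}")
```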

IIRC, if the plot gets more than 1/2 way up the graph the game will stutter while running it in VSYNC and actually running content. I think it is just an indication of load, perhaps you can make more of it.

I don't think there is a built in frame rate display. FRAPS and MSI Afterburner OSD work well. I suspect other 3rd party framerate overlays will work as well.

-Mike
 

Attachments

  • trainzoptions.txt
A couple of things:

1.) The reason I do not use FRAPS (beyond it not working in Windows 10 for me) is that eyeballing the FPS is not a good indicator of performance. Frameview gives me the actual numbers, not just what I eyeballed at the time. It also gives me frametimes, etc. I have more usable data; a rough sketch of the kind of summary I mean follows below. As I've said on numerous occasions, the FPS you eyeball, even FPS averages, do not tell you the whole story. If I did that, I would assume the Threadripper 2920X, Ryzen 2700X, Ryzen 3000 series would perform as well or better in Destiny 2 than the Core i9 9900K @ 5.0GHz. This is simply not so. I could tell simply by playing the game, and the data helped put into context what was actually going on.

2.) I do not understand the logic of letting the train sit there. That's a static scene and doesn't paint a full picture of what's going on or what you could expect when actually playing the game. There is a reason why reviewers do not benchmark anything with a static scene. It's pointless. Game play isn't static like that.
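
Here's roughly what I mean by summarizing a capture, as a sketch rather than FrameView's own tooling. The file name and the MsBetweenPresents column are assumptions about a PresentMon/FrameView-style log; adjust them to whatever the capture actually writes:

```python
import csv

def summarize(path, warmup_frames=300):
    """Average FPS, 1% low, and worst frame time from a per-frame CSV."""
    with open(path, newline="") as f:
        frame_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    frame_ms = frame_ms[warmup_frames:]  # skip the high frame rates right after the load
    frame_ms.sort()
    worst = frame_ms[-max(1, len(frame_ms) // 100):]
    return {
        "avg_fps": round(1000.0 / (sum(frame_ms) / len(frame_ms)), 1),
        "1pct_low_fps": round(1000.0 / (sum(worst) / len(worst)), 1),
        "worst_frame_ms": round(frame_ms[-1], 1),
    }

# print(summarize("ts12_capture.csv"))  # hypothetical capture file
```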

Anyway, here is what I got from the 10980XE @ 4.7GHz. This is also with DDR4 3800MHz RAM @ 18,19,19,39@1T. Compared to the Core i9 9900K @ 5.0GHz using DDR4 3600MHz RAM @ 16,16,16,36@1T. Both are using 32GB of DDR4 RAM, but obviously the 10980XE is in quad-channel mode while the 9900K is in dual-channel mode. Both of these systems are using Intel SSD 740 NVMe drives for their games. However, the 9900K does have a better video card. It has a factory overclocked GeForce RTX 2080 Ti Aorus Xtreme 11G from GIGABYTE and the 10980XE is using an MSI RTX 2080 Super that's factory overclocked. The precise model escapes me at the moment.

[screenshot: Frameview results for the 10980XE]


The 10980XE, of course, performs worse than the 9900K does. This is in line with my prediction, and it calls into question the idea that Super Pi results translate to performance: their Super Pi scores were nearly the same, but in games (TS12 specifically) they do not perform the same. I tested these using your settings and these are the results I got.

Render average is lower on the 10980XE than expected and the Core i9 9900K results are higher than expected. This is with actual movement over a period of at least 240 seconds.

EDIT: Here we can see the results while stationary. I think these numbers are as useless as a piss crusted pubic hair on a toilet rim, but here it is for the 10980XE.

[screenshot: Frameview results for the 10980XE while stationary]


I'm going to test this, but the 10980XE was limited to 60Hz, and I left that value on "auto" in the settings. I think the numbers are lower primarily because of the limitations imposed by V-Sync. Basically, it will pretty much always be below 60FPS and get cut in half.
 
A couple of things:

1.) The reason I do not use FRAPS (beyond it not working in Windows 10 for me) is that eyeballing the FPS is not a good indicator of performance. Frameview gives me the actual numbers, not just what I eyeballed at the time. It also gives me frametimes, etc. I have more usable data. As I've said on numerous occasions, the FPS you eyeball, and even FPS averages, do not tell you the whole story. If I did that, I would assume the Threadripper 2920X, Ryzen 2700X and Ryzen 3000 series would perform as well as or better than the Core i9 9900K @ 5.0GHz in Destiny 2. This is simply not so. I could tell simply by playing the game, and the data helped put into context what was actually going on.

2.) I do not understand the logic of letting the train sit there. That's a static scene and doesn't paint a full picture of what's going on or what you could expect when actually playing the game. There is a reason why reviewers do not benchmark anything with a static scene. It's pointless. Game play isn't static like that.

This is because you don't fully understand the way this particular game behaves and how to tune it. The FPS averages plus CPU plus GPU tuning have enabled me to get this game butter smooth, at its best settings for a particular CPU, GPU, and content. The problem I have is these best settings are a bit of a bummer for me.

Letting the train just sit there makes the test repeatable. Dynamics make the unrestricted frame rate move all over the place. If we get to it, a dynamic test is going to be a saved session where the sim drives the train and FPS data is collected over a specified interval. Again, to make the test repeatable.

Per the bolded red in your post above, I agree. What we are doing here isn't a complete benchmark yet. The dynamic brings in a whole other world including compromising on settings to deal with scenery popping and image quality. The point is, if the game can't maintain 30 FPS or 60 FPS everywhere on the map you care about it will stutter when the image is moving. Thus, when tuning, it is pointless to consider the dynamics until you can get the static image above those minimum frame times.

You have a lot of experience tuning many more games on many more systems than I've ever seen, but I've dicked around with this one for over a decade and I am an expert on how it behaves. A big part of the battle is finding the places where the frame rate crashes due to CPU limiting. There are also places that cause huge, sustained frame rate dips due to GPU limiting.

Anyway, here is what I got from the 10980XE @ 4.7GHz. This is also with DDR4 3800MHz RAM @ 18,19,19,39@1T. Compared to the Core i9 9900K @ 5.0GHz using DDR4 3600MHz RAM @ 16,16,16,36@1T. Both are using 32GB of DDR4 RAM, but obviously the 10980XE is in quad-channel mode while the 9900K is in dual-channel mode. Both of these systems are using Intel SSD 740 NVMe drives for their games. However, the 9900K does have a better video card. It has a factory overclocked GeForce RTX 2080 Ti Aorus Xtreme 11G from GIGABYTE and the 10980XE is using an MSI RTX 2080 Super that's factory overclocked. The precise model escapes me at the moment.

View attachment 205082

The 10980XE, of course, performs worse than the 9900K does. This is in line with my prediction, and it calls into question the idea that Super Pi results translate to performance: their Super Pi scores were nearly the same, but in games (TS12 specifically) they do not perform the same. I tested these using your settings and these are the results I got.

Render average is lower on the 10980XE than expected and the Core i9 9900K results are higher than expected. This is with actual movement over a period of at least 240 seconds.

EDIT: Here we can see the results while stationary. I think these numbers are as useless as a piss crusted pubic hair on a toilet rim, but here it is for the 10980XE.

View attachment 205090

I'm going to test this, but the 10980XE was limited to 60Hz, and I left that value on "auto" in the settings.

There may be differences in the averages due to the measurement method. I've always used FRAPS/RivaTuner OSD frame rate numbers. I know you don't like those, but if you could grab one for comparison, that would be helpful.

It's not clear to me what configurations the Frameview results are for in your post above.

EDIT: Here we can see the results while stationary. I think these numbers are as useless as a piss crusted pubic hair on a toilet rim, but here it is for the 10980XE.

That's because you don't fully understand how to tune this game. And it's not like I'm holding back the other steps as some sort of game. We are having enough trouble just getting reliable results for this step. Once we are on the same page, we can move forward if interested.

Something worth mentioning is this game's tuning of CPU/GPU is very independent and maybe that is what is tripping the understanding up. The CPU/GPU settings only really interact at the final stages of getting everything tuned up. It appears the CPU generates the next frame while the GPU renders the last frame. Thus, the CPU and GPU are free to use up a whole frame time each before stutters happen while VSYNC'ed. It is not a twitch shooter. It doesn't ever need more than 50-60 FPS. Input lag doesn't really matter either. Another thing is the dynamics in this game are very slow compared to an action title like DOOM.
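
A toy model of that idea, for illustration only -- the per-frame costs below are made up, and this is not TS12's actual code. The point is just the arithmetic: if the CPU prepares the next frame while the GPU renders the last one, the frame time is set by the slower stage rather than by the sum of the two.

Code:
    def frame_time_ms(cpu_ms, gpu_ms, pipelined=True):
        # Pipelined: the slower stage sets the pace. Serial: stages add up.
        return max(cpu_ms, gpu_ms) if pipelined else cpu_ms + gpu_ms

    cpu_ms, gpu_ms = 15.0, 14.0  # made-up per-frame costs, each just under a 60Hz budget
    for pipelined in (True, False):
        ft = frame_time_ms(cpu_ms, gpu_ms, pipelined)
        print(f"pipelined={pipelined}: {ft:.1f} ms/frame -> {1000 / ft:.1f} FPS")
    # Under this model both stages can each spend nearly the full 16.7 ms (60Hz)
    # frame budget before a V-Synced game starts to stutter.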

-Mike
 
This is because you don't fully understand the way this particular game behaves and how to tune it. The FPS averages plus CPU plus GPU tuning have enabled me to get this game butter smooth, at its best settings for a particular CPU, GPU, and content. The problem I have is these best settings are a bit of a bummer for me.

Let me stop you right there. I am getting a pretty good idea of how this game works. Poorly. Be that as it may, tuning this or any other game is pretty much the same: you find the settings that allow the game to run best. Each game is different, yes, but this game doesn't provide that many options for the task. In any case, tuning isn't what I'm doing here, because I'm using your settings. What I am seeing by using Frameview is how the game actually works. At this point, I have more data about what the game is actually doing than you do. I have seen how it behaves when you actually move the train and from a static position as requested. I have also tested it beyond 60Hz, etc. I'm simply observing what's going on.

And I'm telling you that eyeballing some FRAPS number isn't scientific and it isn't accurate, nor does it paint the entire picture. Like your Super Pi method, you are approximating performance using this method. As I've indicated before, you could have the same average or observed frame rate in FRAPS, yet that might not be the case. Frameview is free. I urge you to expand your data set and understanding of the game you care so much about by actually getting real, repeatable and accurate data. Eyeballing FRAPs isn't that.

Telling me I don't know how to tune this ancient game is at this point, nonsense. These are your options:

[screenshots: TS12 settings options]


This isn't complex. Not at all. And I can see precisely what your settings are doing. You are making the game CPU limited. That's fine. I do it all the time for everything I test. But understand, you don't need years of experience to figure this out. There isn't much to tune here for CPU and then GPU as you put it.

Letting the train just sit there makes the test repeatable. Dynamics make the unrestricted frame rate move all over the place. If we get to it, a dynamic test is going to be a saved session where the sim drives the train and FPS data is collected over a specified interval. Again, to make the test repeatable.

Let's get something straight. Your train is on a track. If I get it moving, I can recreate the test as many times as I need to manually. This isn't complicated. Let's not be absurd. The tests will perform within a margin of error and we can account for that with multiple passes. Sure, if we can save a run and repeat it, that's best, but the data will tell us if we got things right by matching up in each run. I benchmark games that do not have built in benchmarks. Guess what? I can get the same results within 1-3% FPS easily. I do this for Destiny 2 all the time.
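
A quick sketch of what that check looks like in practice: average the FPS of each manual pass and report the spread, so the data itself tells you whether the runs are comparable. The pass numbers below are placeholders.

Code:
    import statistics

    passes = [41.2, 41.9, 41.5]  # average FPS from repeated manual runs (placeholder values)
    mean = statistics.mean(passes)
    spread_pct = 100 * (max(passes) - min(passes)) / mean
    print(f"mean {mean:.1f} FPS, run-to-run spread {spread_pct:.1f}%")
    # If the spread stays in the 1-3% range, the manual runs are close enough to compare.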

Per the bolded red in your post above, I agree. What we are doing here isn't a complete benchmark yet. The dynamic brings in a whole other world including compromising on settings to deal with scenery popping and image quality. The point is, if the game can't maintain 30 FPS or 60 FPS everywhere on the map you care about it will stutter when the image is moving. Thus, when tuning, it is pointless to consider the dynamics until you can get the static image above those minimum frame times.

No, it really doesn't. This is flat out wrong. No one does this for any game because it's nonsense and meaningless. The flaw in your argument is that you make the assumption that the initial static scene is representational of the whole. Here is a hint: It isn't.

You have a lot of experience tuning many more games on many more systems than I've ever seen, but I've dicked around with this one for over a decade and I am an expert on how it behaves. A big part of the battle is finding the places where the frame rate crashes due to CPU limiting. There are also places that cause huge, sustained frame rate dips due to GPU limiting.

That experience does come into play here. However, I will concede that it's CPU limited in the most rudimentary way. That's the part that I was wrong about and that's the part that surprised me. You won't find those spots without leaving the station. Just saying... Beyond that, I can say that the initial spot is more demanding than the average run of the track that I did. However, it's meaningless to talk about tuning in a game that doesn't have anything to work with for tuning. When a game doesn't run right, you basically have two choices: lower your settings until it runs acceptably (or as well as you can make it run) or throw faster hardware at it. You can try and alter settings to favor a stronger component. If you have a 2600K and an RTX 2080 Ti, leaning on GPU bound settings may improve your situation, but it's a quick thing to try and find out.

You can have a decade or more on this game, but that doesn't amount to much based on what I can see here. I'm not trying to be insulting, but the fact is there isn't anything to tune. Once you make your CPU and GPU as fast as they can be, that's it. From there it's just a matter of playing with settings to get the most playable result possible. The details change for each game, but the overall concept is the same for every game I've ever seen, including this one. We just have less to work with in TS12 than we usually do.

There may be differences in the averages due to the measurement method. I've always used FRAPS/RivaTuner OSD frame rate numbers. I know you don't like those, but if you could grab one for comparison, that would be helpful.

If by differences you mean accuracy? Then yes. As I said, using FRAPS or RivaTuner OSD only allows you to observe the frame rate manually. What you need to understand is that these results can change so quickly that the OSD doesn't capture the change or your eyes don't register them. I never saw the FPS counter in Destiny 2 drop to 36FPS on my Threadripper 2920X. I could feel the frame rate drops but only saw numbers in the mid-40's. But the data showed that it was much worse. Similarly, the ultra low minimums in TS12 aren't something you see either. They happen quickly. The frame time numbers BTW actually indicate how smooth a game is. You do need min/avg/max numbers of course, but often the frame times are more important. The min/avg/max is a quick and dirty way to compare vast lists of hardware at once, but you really need all the information when making a decision about a CPU.

Again, FRAPS doesn't work on these systems. I don't know why, but I haven't been able to make it work. An OSD number is inaccurate anyway. Use Frameview, and let's see what's really going on here. You may find that some of these CPU's are a bigger upgrade than you would have thought. Or not, but at least you'll know for sure.
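
To make the point concrete, here is a minimal sketch of the kind of summary you can pull out of logged frame times that an OSD will never show you. It assumes a PresentMon-style CSV with an MsBetweenPresents column (Frameview's logs are built on PresentMon); the file name is just a placeholder, so check your own log's header before running it.

Code:
    import csv
    import statistics

    def frame_stats(path):
        # Read per-frame times in milliseconds from the log.
        with open(path, newline="") as f:
            ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
        fps = sorted(1000.0 / m for m in ms)  # slowest frames first

        def low(pct):
            # Mean FPS of the slowest pct% of frames -- the dips you feel but rarely "see".
            k = max(int(len(fps) * pct / 100), 1)
            return statistics.mean(fps[:k])

        return {
            "avg_fps": round(statistics.mean(fps), 1),
            "1%_low": round(low(1), 1),
            "0.1%_low": round(low(0.1), 1),
            "worst_frametime_ms": round(max(ms), 1),
        }

    print(frame_stats("frameview_log.csv"))  # placeholder file name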

It's not clear to me what configurations the Frameview results are for in your post above.

There isn't much to configure. It's simple.

[screenshot: Frameview configuration]


The overlay feature doesn't work in all games. It usually works in DX11 games, but not 12. It doesn't seem to work here at all. Disregard that part of the configuration. I just leave it at default values.


That's because you don't fully understand how to tune this game. And it's not like I'm holding back the other steps as some sort of game. We are having enough trouble just getting reliable results for this step. Once we are on the same page, we can move forward if interested.

Again, I provided the static result as well. I think it's useless, but I provided it. Furthermore, let's not be obtuse. There isn't anything to tuning this game. There are what, four or five settings? Seriously. There are more tuning options available on my microwave.

Something worth mentioning is this game's tuning of CPU/GPU is very independent and maybe that is what is tripping the understanding up. The CPU/GPU settings only really interact at the final stages of getting everything tuned up. It appears the CPU generates the next frame while the GPU renders the last frame.

Sorry, that's not how it works. Physics are calculated by the CPU in the absence of something like PhysX or other compute instructions being able to leverage the GPU's compute functions. The visual information is always drawn by the GPU. At lower settings, we are bound more by the CPU than the GPU. At higher settings, we run into a unique issue where we are bound by the CPU anyway, regardless of what it is. However, GPU vs. CPU loading is still the same as it is for anything else. The problem here is that the physics engine of the game is too primitive to leverage modern CPU instructions, multi-threaded designs, or more complex GPU functions.

Thus, the CPU and GPU are free to use up a whole frame time each before stutters happen while VSYNC'ed.

No, not how that works. If you believe this is so, please provide actual evidence supporting it. I've seen simulations before and plenty of game engines and that's just not how anything works. Remember, you are the one making the claim and thus, the burden of proof is on you. That said, I think you're right that the physics are tied to the frame rate. This may ultimately be why actual CPU performance is largely meaningless here. If so, we can never correct it and this is as good as the game will get. This isn't a unique design either. Plenty of games tied their physics to frame rates. EA did this with one of the awful Need for Speed PC ports. Setting it to 60FPS made the game's physics and everything else operate twice as fast. While hilarious, this was a poor design. I think, based on the data I've collected, that this is what we are seeing here.
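
To illustrate why tying physics to the frame rate has that effect, here is a small sketch comparing a frame-tied update with a fixed-timestep loop. It is purely illustrative and not how TS12 or that Need for Speed port is actually written.

Code:
    def simulate(seconds, fps, frame_tied, dt_fixed=1 / 30):
        pos, speed = 0.0, 10.0            # metres travelled, metres per second
        acc = 0.0                         # accumulated real time for the fixed-step loop
        for _ in range(int(seconds * fps)):
            if frame_tied:
                pos += speed * dt_fixed   # one physics tick per rendered frame
            else:
                acc += 1 / fps            # step the simulation in fixed chunks of real time
                while acc >= dt_fixed:
                    pos += speed * dt_fixed
                    acc -= dt_fixed
        return pos

    for fps in (30, 60):
        print(f"{fps} FPS: frame-tied {simulate(10, fps, True):.0f} m, "
              f"fixed-step {simulate(10, fps, False):.0f} m")
    # Frame-tied physics covers twice the distance at 60 FPS; the fixed-step loop does not.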

The game is demanding enough to keep a processor at 5.0GHz or less from comfortably crossing 60FPS. Thanks to the built-in V-Sync that can't be disabled, we can't get past 30FPS and change because the stupid game won't let us. Sure, it does support V-Syncing to 100Hz, etc., but we can't reliably get there.

It is not a twitch shooter. It doesn't ever need more than 50-60 FPS. Input lag doesn't really matter either. Another thing is the dynamics in this game are very slow compared to an action title like DOOM.

-Mike

I don't disagree with any of this and it doesn't matter. The point is that so long as the game is incapable of leveraging modern CPU instructions or additional cores, it's likely always going to be limited like this. Even going to 3440x1440 @ 100Hz with V-Sync on "Full", this stupid thing still sits at about 30FPS. That tells me we are still CPU bound and that this thing can't really leverage my GPU. I can do some more testing on that, but I suspect that's the case. With V-Sync, it will always appear relatively smooth. That's what it does. It's absolutely smooth on my 9900K system with G-Sync.

More on that point, G-Sync may be the reason why I can hit 45FPS on the 9900K system. It may be overriding the game's behavior regarding V-Sync.
 
More on that point, G-Sync may be the reason why I can hit 45FPS on the 9900K system. It may be overriding the game's behavior regarding V-Sync.

I also had G-Sync enabled on my tests. If we had Frameview across the board, surely that would only serve to increase the accuracy of the results.

It is worth noting that the in-game video settings also play a large role in forcing the game to be CPU bound, but it's still not rocket science.
 
And I'm telling you that eyeballing some FRAPS number isn't scientific and it isn't accurate, nor does it paint the entire picture. Like your Super Pi method, you are approximating performance using this method. As I've indicated before, you could have the same average or observed frame rate in FRAPS, yet that might not be the case. Frameview is free. I urge you to expand your data set and understanding of the game you care so much about by actually getting real, repeatable and accurate data. Eyeballing FRAPs isn't that.

Telling me I don't know how to tune this ancient game is at this point, nonsense.

A hallmark of science is *repeatable* results. In my data, FRAPS/AB OSD FPS results are repeatable. It won't pick fine details out, but, again, it is repeatable.

Thinking you are an expert in a game you have never seen before comes across as arrogant.

These are your options:
View attachment 205103 View attachment 205104

This isn't complex. Not at all. And I can see precisely what your settings are doing.

You missed all the in-game options which are important too - setting them is in the instructions for TS12 Bench 1. Did you set these for your testing?

Let's get something straight. Your train is on a track. If I get it moving, I can recreate the test as many times as I need to manually. This isn't complicated. Let's not be absurd.

The static test is deliberately a simple test. If we can't get correlation with this simple test, adding dynamics is pointless.

Beyond that, I can say that the initial spot is more demanding than the average run of the track that I did. However, it's meaningless to talk about tuning in a game that doesn't have anything to work with for tuning. When a game doesn't run right, you basically have two choices: lower your settings until it runs acceptably (or as well as you can make it run) or throw faster hardware at it.

That isn't the most demanding spot in the game. It is a convenient spot for people unfamiliar with the game to try in the simplest possible way.

You are right, the game is at the "throw faster hardware at it" stage. This is what motivated the original question, and the SPI method in that post answered it: no current CPU will do the job. The posted data is evidence of that - your own SPI and framerate data included.

I'm not trying to be insulting, but the fact is there isn't anything to tune. We just have less to work with in TS12 than we usually do.

First sentence isn't true, see in-game settings above. Second sentence is true. In-game settings are trickier because "good enough" gets subjective.


No, not how that works. If you believe this is so, please provide actual evidence supporting it. I've seen simulations before and plenty of game engines and that's just not how anything works.

My CPU/GPU balance conclusion comes from watching the game work. I've seen it run smooth at 80%+ CPU and 90% GPU. If you run it long enough, you will see it too.

More on that point, G-Sync may be the reason why I can hit 45FPS on the 9900K system. It may be overriding the game's behavior regarding V-Sync.

I've never used this game with G-Sync. I've never used G-Sync. I have no idea what G-Sync will do with this. <--- See how that works, it's easy, try it sometime.

That said, I don't believe G-Sync will help the CPU limited case. It could help with TANE, but what the game engine will do with it is a crap shoot. Could be smooth as butter, could be jerky -- I'm not willing to spend the $$$$ enabling G-Sync to find out.

-Mike
 
A hallmark of science is *repeatable* results. In my data, FRAPS/AB OSD FPS results are repeatable. It won't pick fine details out, but, again, it is repeatable.

Sort of repeatable. What if you blink? Seriously, you can't be this obtuse. This is a ballpark estimate at best.

Thinking you are an expert in a game you have never seen before comes across as arrogant.

Acting like this is some super complex task that took years to master and no one could possibly understand it but you comes across as arrogant. It's not hard to figure this out.


You missed all the in-game options which are important too - setting them is in the instructions for TS12 Bench 1. Did you set these for your testing?

No I didn't and yes in that order.

The static test is deliberately a simple test. If we can't get correlation with this simple test, adding dynamics is pointless.

Wrong. The static test is pointless and your dynamic "test" isn't as dynamic as you seem to think it is. It's a train, on some tracks. It's easily repeatable.


That isn't the most demanding spot in the game. It is a convenient spot for people unfamiliar with the game to try in the simplest possible way.

I never said it was. But the data gathered there alone isn't useful for determining overall performance.

You are right, the game is at the "throw faster hardware at it" stage. This is what motivated the original question, and the SPI method in that post answered it: no current CPU will do the job. The posted data is evidence of that - your own SPI and framerate data included.

I'm not disputing what the data shows. However, data can lead to the wrong conclusion. Your conclusion is that newer CPU's aren't faster at single-threaded applications than older ones. We know this isn't true. The only examples which confirm this idea are few and far between. Basically, TS12 and Crysis 1 are all that come to mind off hand. I don't think that's true. I think V-Sync and the fact that physics is tied to it is the main problem. The newer CPU's don't put enough of a lead over the old ones to clear 60FPS consistently. Therefore, V-Sync always kicks us back down to 30FPS. The Frameview data seems to support this idea. If you look at the data for the 10980XE, you can see Render Max = 60FPS and Render average = 30FPS. We don't really see anything in between unless G-Sync is in play.

Again, this is a theory which I think requires further testing to call 100% true, but that's what I'm seeing. The data and the facts seem to line up to support this conclusion.


First sentence isn't true, see in-game settings above. Second sentence is true. In-game settings are trickier because "good enough" gets subjective.

I didn't post them, but I didn't forget them initially. I set those options. Again, don't act like there is great depth to the configuration of this game. Setting the clock on my oven is more difficult than this.

My CPU/GPU balance conclusion comes from watching the game work. I've seen it run smooth at 80%+ CPU and 90% GPU. If you run it long enough, you will see it too.

OK, you reached a conclusion based on observation without an underlying understanding of how games or simulations work as they relate to GPU's and CPU's. Got it.

I've never used this game with G-Sync. I've never used G-Sync. I have no idea what G-Sync will do with this. <--- See how that works, it's easy, try it sometime.

If I have truly no idea how something works, I'll say that. When I have related experience that allows me to make a prediction, I will. Sometimes I'm right and sometimes I'm wrong. It's that simple. I already did this when you asked about Fallout 4 and I said: "I don't know." I did have the answer for GTA IV, because I have used it, tested it and obtaining data for it is easy.

That said, I don't believe G-Sync will help the CPU limited case. It could help with TANE, but what the game engine will do with it is a crap shoot. Could be smooth as butter, could be jerky -- I'm not willing to spend the $$$$ enabling G-Sync to find out.

-Mike

My point is that the NVIDIA drivers can and do override application settings. Generally, the driver is set to enhance them. Meaning that it can add variables on top of what's already being done. For example: If you have a game that only supports TXAA, then through the driver you can add MSAA on top of it.

G-Sync being enabled means that my monitor will do values in between 30Hz and 60Hz. That's how I believe it was able to show results of 45FPS in the Frameview data. With V-Sync, you're basically always going to get 60, 30 or 15. My theory is that this game's physics being tied to the frame rate, and thus to V-Sync, are the problem. The results seen on the 9900K seem to support this idea, though I wouldn't call it conclusive. More testing than I'm willing to do on this would be required before I would call that certain. But it is a theory that does fit the facts, given that we've pretty much only seen CPU's achieve roughly 30FPS or so regardless of how powerful they are. Keep in mind the Frameview data shows 30FPS average and 60FPS max for display. That tells us that the game does hit 60FPS, but when it falls short, it kicks down to 30FPS.
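
A simplified model of that quantisation, for illustration only: with plain double-buffered V-Sync, a frame that misses the refresh deadline waits for the next refresh, so delivered FPS snaps to the refresh rate divided by a whole number (60, 30, 20, 15, ...), while G-Sync/adaptive sync removes that constraint. This is a sketch of the general mechanism, not a capture of what the driver actually does in TS12.

Code:
    import math

    def vsync_fps(raw_fps, refresh_hz=60):
        # Each frame occupies a whole number of refresh intervals under double-buffered V-Sync.
        frame_ms = 1000.0 / raw_fps
        refresh_ms = 1000.0 / refresh_hz
        intervals = math.ceil(frame_ms / refresh_ms)
        return refresh_hz / intervals

    for raw in (62, 59, 45, 31, 29):
        print(f"game can render {raw} FPS -> delivered {vsync_fps(raw):.0f} FPS with V-Sync")
    # 62 -> 60, 59 -> 30, 45 -> 30, 31 -> 30, 29 -> 20: a CPU just short of 60 FPS
    # gets knocked all the way down to 30.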

Thus, we are CPU limited, but it's not because newer CPU's aren't faster. They just aren't fast enough to clear the 60FPS hurdle to avoid V-Sync knocking our frame rates back down to 30FPS. The only way to know for sure would be to take something that can clock high and boost it to some ridiculous clock speed and see if that changed anything with TS12. I don't have the setup for LN2, so I can't test that.

I think the game engine is way too limited to ever show improvements on modern hardware. I'm not sure why it was designed this way, other than that tying physics to frame rate is easy.
 
I can run G-Sync too if anyone is interested; I deliberately disabled it.
 
Sort of repeatable. What if you blink? Seriously, you can't be this obtuse. This is a ballpark estimate at best.
Acting like this is some super complex task that took years to master and no one could possibly understand it but you comes across as arrogant. It's not hard to figure this out.

Won't matter if you blink. If you are wrong in your attempted settings, there will be hitches in the motion when you go to really run the program. The game isn't complex; it's just tedious and time consuming to tweak, determine compromises, and look at almost non-existent HW upgrades. In actual application, if you wanted a 50 FPS minimum using this approach you would shoot for a measured/predicted minimum in the high 50s.


I'm not disputing what the data shows. However, data can lead to the wrong conclusion. Your conclusion is that newer CPU's aren't faster at single-threaded applications than older ones. We know this isn't true. The only examples which confirm this idea are few and far between. Basically, TS12 and Crysis 1 are all that come to mind off hand.

I'm only concerned about this game, FSX, and perhaps P3D at this time. I don't really care about other single-threaded apps and didn't mean to make that general a statement. In the single threaded apps in my world, the newer CPUs aren't faster. I suspected it, this thread has proved it.

I don't buy computer hardware to run every game and every program. I buy them to run the games and programs I care about, which means other games/apps that can leverage the newer CPUs offer zero motivation for me to make that upgrade. It's not rocket science.

OK, you reached a conclusion based on observation without an underlying understanding of how games or simulations work as they relate to GPU's and CPU's. Got it.

A conclusion based on observation is a valid approach. I don't have to fully understand the weather to know it is raining when I see water drops falling from the sky.

(G-sync commentary removed for brevity)
I think the game engine is way too limited to ever show improvements on modern hardware. I'm not sure why it was designed this way, other than that tying physics to frame rate is easy.

I don't know if bringing G-Sync into the conversation is a good idea. We aren't really seeing eye to eye as it is. I suspect G-Sync will be benign with respect to the CPU limited stuff. What I will say is that at 30FPS one can get the game smooth. The problem with 30FPS is long panning shots will have noticeable motion judder. It is the same kind of judder one sees occasionally on panning shots in 24 FPS film. Film doesn't look so bad because the director/cinematographer try to shoot around the limitation. Bottom line is the game looks much better at 50 FPS because of this. So G-Sync, even if it keeps the motion smooth down in the 30s and below, is still an insufficient solution.

That said, I'd be interested in finding out what G-Sync would really do here - just not interested enough to buy my own G-Sync HW. In results, reporting whether G-Sync is present and enabled, present and disabled, or not present is probably a good idea.

Tying physics to the frame rate because it is easy makes perfect sense. The game was developed in the mid-90's by a small team. There isn't much money in the genre. Flight sims, which are similar, enjoy a much larger market, and they are struggling as well. In several releases, Lockheed Martin has only made very small improvements over FSX in the years they have been developing it.

I won't have much time for this until next week, but I may be able to sneak in a TS Bench 1 using that Nvidia tool over the weekend.

-Mike
 
the newer CPUs aren't faster. I suspected it, this thread has proved it.

You state this as gospel, yet we have some data points from this very thread that support this theory and some that don't. That's my entire problem here.
 
the newer CPUs aren't faster. I suspected it, this thread has proved it.

You state this as gospel, yet we have some data points from this very thread that support this theory and some that don't. That's my entire problem here.

What data points don't support it?

Also, please don't quote me out of context. The whole quote, which you mischaracterized, is

In the single threaded apps in my world, the newer CPUs aren't faster. I suspected it, this thread has proved it.

-Mike
 
OK, so G-Sync doesn't alter my results. I think this is simply a poorly coded game, and that the software loop is limiting the performance.
 
In spite of the eyebrow-lifting arrogance, this is an interesting thread.
Many questions about true single-threaded performance abound. Have we come a long way, or have we just evolved through software and CPU extensions into a modern form that may not show tangible benefits in single- or lightly-threaded software when compared to older hardware, especially CPUs run at similar clock speeds? Speeds which, apparently, we can't move past!
 
Ok, there is a lot to unpack here. I suspect for a lot of you this is TL:DR unless you like geeking out about this stuff.
-Mike

I got about halfway through and gave up, because of the incredibly low S/N ratio and the terrible way you micro-quote and micro-respond a sentence at a time. One thing you certainly haven't mastered is the art of communicating.
 
In spite of the eyebrow-lifting arrogance, this is an interesting thread.
Many questions about true single-threaded performance abound. Have we come a long way, or have we just evolved through software and CPU extensions into a modern form that may not show tangible benefits in single- or lightly-threaded software when compared to older hardware, especially CPUs run at similar clock speeds? Speeds which, apparently, we can't move past!

There is a reason why we get more cores instead of faster cores, and also why there is a trend towards chiplets like AMD with Ryzen. I'm sure they can make faster but simpler CPU's, which would give a status quo in performance.

But bad software is bad software and no matter how much performance you throw at it, it will not work significantly better.

I think it was in an interview with David Kanter where I read that he thinks big IPC jumps are (for now) a thing of the past. What AMD did with Ryzen was pretty huge, but it only closed the gap with Intel, and they would now also have to do like Intel and give IPC increases of 5-10% per updated architecture.
 
What data points don't support it?

Also, please don't quote me out of context. The whole quote, which you mischaracterized, is



-Mike

This was not my intention; taken out of context it would be mischaracterized, but we've only been talking about one game and one synthetic test for days now, so I believe that you're nitpicking here.

Here is a summary of data,

2600k 24 32M SPI 411.077s. TS12 - 32FPS (Average - Eyeball - Train Stationary)
9900k(1) 24 32M SPI 424.57s. TS12 - 30-31FPS (Average - Eyeball - Train Stationary)
3900X 24 32M SPI 501s. TS12 - 34-36FPS (Average - Eyeball - Train Stationary)
9900K(2) 24 32M SPI ???. TS12 - 45FPS (Average - Frameview - Train Moving)
9900K(2) 24 32M SPI ???. TS12 - 41.7FPS (Average - Frameview - Train Moving)
10980XE 24 32M SPI ???. TS12 - 29.8FPS (Average - Frameview - Train Moving)
10980XE 24 32M SPI ???. TS12 - 29.2FPS (Average - Frameview - Train Stationary)

If it looks like it's all over the place, I'd agree with you.

In the single threaded apps in my world, the newer CPUs aren't faster. I suspected it, this thread has proved it.

Here's your full quote, I still say this is entirely unsupported. You need more tests, at the very least.
 
I got about halfway through and gave up, because of the incredibly low S/N ratio and the terrible way you micro-quote and micro-respond a sentence at a time. One thing you certainly haven't mastered is the art of communicating.

Dan's points were all over the map, and I was trying to address them without confusion. I agree it did not work well. Will be sticking to the data.

-Mike
 
This was not my intention; taken out of context it would be mischaracterized, but we've only been talking about one game and one synthetic test for days now, so I believe that you're nitpicking here.

Here is a summary of data,

2600k 24 32M SPI 411.077s. TS12 - 32FPS (Average - Eyeball - Train Stationary)
9900k(1) 24 32M SPI 424.57s. TS12 - 30-31FPS (Average - Eyeball - Train Stationary)
3900X 24 32M SPI 501s. TS12 - 34-36FPS (Average - Eyeball - Train Stationary)
9900K(2) 24 32M SPI ???. TS12 - 45FPS (Average - Frameview - Train Moving) [struck through]
9900K(2) 24 32M SPI ???. TS12 - 41.7FPS (Average - Frameview - Train Moving) [struck through]
10980XE 24 32M SPI ???. TS12 - 29.8FPS (Average - Frameview - Train Moving) [struck through]
10980XE 24 32M SPI ???. TS12 - 29.2FPS (Average - Frameview - Train Stationary)

If it looks like it's all over the place, I'd agree with you.
If you go back to the original post, no specific game was mentioned and the claim was narrowly focused. I only brought up TS12 because I was asked and it was also so cheap. Frankly, it wasn't my intention to go into this level of detail here, but there are interesting parts to it and I really do appreciate those that have collected data.

I put strikethrough on the data above that doesn't conform to the instructions for TS12 Bench 1 - it is not comparable to the stationary data. The last line needs a SPI32M score. If you look at the comparable data, they all indicate a very small improvement at best:

2600k 24 32M SPI 411.077s. TS12 - 32FPS (Average - Eyeball - Train Stationary)
9900k(1) 24 32M SPI 424.57s. TS12 - 30-31FPS (Average - Eyeball - Train Stationary)
3900X 24 32M SPI 501s. TS12 - 34-36FPS (Average - Eyeball - Train Stationary)
10980XE 24 32M SPI ???. TS12 - 29.2FPS (Average - Frameview - Train Stationary)

As shown earlier, the 9900k(1) and 2600k scores correlate almost perfectly. The 3900x is the interesting result because it beat the other two in TS12 despite having a lower clock rate and lower SPI32 score. Also, the 3900x shows an improvement, but it isn't enough. The very last, for the 10980XE, shows a decrease in performance, and even without the SPI score, it correlates with the premise of at least some single-threaded apps getting little boost from current CPUs.

We need another Ryzen 3 score at a different clock rate to see if the ratio of performance of SPI to TS12 tracks with the AMD architecture as well as Intel.

-Mike
 
If you go back to the original post, no specific game was mentioned and the claim was narrowly focused. I only brought up TS12 because I was asked and it was also so cheap. Frankly, it wasn't my intention to go into this level of detail here, but there are interesting parts to it and I really do appreciate those that have collected data.

I put strikethrough on the data above that doesn't conform to the instructions for TS12 Bench 1 - it is not comparable to the stationary data. The last line needs a SPI32M score. If you look at the comparable data, they all indicate a very small improvement at best:

2600k 24 32M SPI 411.077s. TS12 - 32FPS (Average - Eyeball - Train Stationary)
9900k(1) 24 32M SPI 424.57s. TS12 - 30-31FPS (Average - Eyeball - Train Stationary)
3900X 24 32M SPI 501s. TS12 - 34-36FPS (Average - Eyeball - Train Stationary)
10980XE 24 32M SPI ???. TS12 - 29.2FPS (Average - Frameview - Train Stationary)

As shown earlier, the 9900k(1) and 2600k scores correlate almost perfectly. The 3900x is the interesting result because it beat the other two in TS12 despite having a lower clock rate and lower SPI32 score. Also, the 3900x shows an improvement, but it isn't enough. The very last, for the 10980XE, shows a decrease in performance, and even without the SPI score, it correlates with the premise of at least some single-threaded apps getting little boost from current CPUs.

We need another Ryzen 3 score at a different clock rate to see if the ratio of performance of SPI to TS12 tracks with the AMD architecture as well as Intel.

-Mike

Except, I provided Super Pi scores for the two platforms I tested:

I'll fire the Super Pi benchmark off on a 5.0GHz 9900K and a 4.7GHz 10980XE and report back.

EDIT: Here are the numbers:

Core i9 10980XE @ 4.7GHz (All core)
6m 49.669s

Last two numbers:
6m 20.790s
6m 35.146s

Core i9 9900K @ 5.0GHz (All core)
6m 48.140s

Last two numbers:
6m 20.360s
6m 34.590s

I'm not sure why TXE36 cares about those numbers as that's not the reported result, just what it reports for those last two loops.

I've provided both the actual score provided by Super Pi as well as the last two loop numbers. When I brought up you ignoring the last number, you claimed it was because of storage. That was nonsense, as I tested on mechanical hard drives, yet my scores were slightly better than the other 9900K numbers provided. So, that's not it. So can we knock off this last loop nonsense?

I provided moving AND stationary numbers for TS12. You seem to have a habit of ignoring anything that isn't convenient or doesn't fit your narrative. For example, you go into all this crap about the "dynamic" and the fact is, no one benchmarks ANY game out there sitting in place. Why? It isn't representational of what you can expect playing the game. Why? Because games are dynamic. If you want a real worst case scenario to tune from, you find the worst part of a game with the lowest performance as your minimal standard to bench from. This should never be a static anything. The game supports saving runs, so why not save a run and pass the file around so that we can all be on the same page.

I don't know why you insist on this Super Pi correlation when both the 3900X numbers and 10980XE numbers prove it doesn't correlate. The 10980XE has Super Pi scores identical to the 9900K and performed worse for the exact reasons I said it would. The 3900X had a worse Super Pi result and scored better in TS12 than the other systems. I think it's safe to say that Super Pi, while an OK ballpark with your math, isn't a great way to predict performance in TS12. You go on and on about scientific this or that and your methods aren't scientific. They aren't representational of the game's performance as a whole and the methodology from top to bottom is flawed. You want to sit there and say what we are discussing about other games and what affects them doesn't apply, but that's not so. HEDT performs worse in TS12 than mainstream hardware. That's the same. Super Pi results do not directly correlate to performance, that's the same. This is just another old game that doesn't benefit from multi-threading. It's no different than Crysis in that sense.

The bottom line: all the rhetoric about tuning is nonsense. If you get a faster CPU, faster RAM etc. it won't really benefit this game because it's old. That I can agree with you on. A 2080Ti will be an upgrade, sure, but probably not worth the money for this game, because the game is held back on the CPU front by its inability to leverage the newer CPU's. So don't upgrade. But don't kid yourself. It's not because modern hardware hasn't improved enough, it's because this game's engine is primitive. It doesn't benefit from the changes to modern CPU's because of those limitations and for no other reason. To me, this is a case closed scenario. You have no reason to upgrade without upgrading the software, which you seem unwilling to do.

Lastly, I think the V-Sync / physics limitations come into play in some form or fashion here. We'll never know without a 7GHz 9900K or something. We can't reliably crack 60FPS and until we can, V-Sync will probably always kick down to 30FPS here. Again, the software's design is a primitive one from a decade ago.
 
1) Except, I provided Super Pi scores for the two platforms I tested...

2) The game supports saving runs, so why not save a run and pass the file around so that we can all be on the same page.

3) I don't know why you insist on this Super Pi correlation when both the 3900X numbers and 10980XE numbers prove it doesn't correlate.

4) Lastly, I think the V-Sync / physics limitations come into play in some form or fashion here. We'll never know without a 7GHz 9900K or something. We can't reliably crack 60FPS and until we can, V-Sync will probably always kick down to 30FPS here. Again, the software's design is a primitive one from a decade ago.
1) Dan, you are correct, I confused it with the missing second Ryzen datapoint. Updating the apples to apples comparable data to account for this:

2600k 24 32M SPI 411.077s. TS12 - 32FPS (Average - Eyeball - Train Stationary)
9900k(1) 24 32M SPI 424.57s. TS12 - 30-31FPS (Average - Eyeball - Train Stationary)
3900X 24 32M SPI 501s. TS12 - 34-36FPS (Average - Eyeball - Train Stationary)
10980XE 24 32M SPI 380.79s. TS12 - 29.2FPS (Average - Frameview - Train Stationary)

The 9900K data reported was with the train moving. It would be interesting to see the 9900K data conforming to TS Bench 1 especially because its SPI is faster than Keljian's, @380.36s

For the results to be apples to apples, SPI must be run the same way, TS12 Bench 1 must be run the same way. It actually doesn't matter which SPI loop you use -- as long as it is always the same one. ETA: TS12 Bench 1 uses loop 24.

2) Saving a game run is precisely what I'm working on now. If we don't do precisely the same thing, the data won't be comparable. Patience.

3) First part of this, without a valid TS Bench 1 score for your 9900k we don't know. Second part of this, you even said the 10980XE is a different beast and is "worse for gaming" IIRC. Do the TS Bench 1 score properly and then we can compare - I expect your 9900K score to be a smidge better than Keljian's

All of the valid reported data, SO FAR, has supported the conclusion that, for the stuff I care about, these new processors are not faster. I've asked several times for examples of single-threaded apps that see a significant boost on the new arch and gotten back crickets.

4) This is a moot point because we all know the game won't crack 60 FPS without the equivalent of a 10G 2600K, which ain't happening anytime soon. Today's offerings will boost good multi-threaded apps by quite a bit more than doubling the clock rate would, but some, possibly many, single-threaded apps get nearly zero.

-Mike
 
I feel this thread desperately needs a picture of a woman yelling at a cat.
 
Dan, thanks for that. Looking back, it looks like I miscopied a number from my spreadsheet, as my SPI score is 431, not 411. To be sure, I re-ran the tests:

2600K 4.7G DDR-1866 9-9-9-27 No G-Sync: 32M SPI: 431s TS12 Bench1: 33 via AB OSD, dipping to 32 about once every 10-15 seconds. So:

2600k 4.7G 24 32M SPI 411.077s. TS12 - 32FPS (Average - Eyeball - Train Stationary)
2600k 4.7G 24 32M SPI 431.044s. TS12 - 33FPS (Average - Eyeball - Train Stationary)
9900k(1) 24 32M SPI 424.57s. TS12 - 30-31FPS (Average - Eyeball - Train Stationary) (Keljian)
9900K(2) 24 32M SPI 380.36s TS12 - 37.6 (Average - Frameview - Train Stationary) 5G (Dan)
3900X 24 32M SPI 501s. TS12 - 34-36FPS (Average - Eyeball - Train Stationary)
10980XE 24 32M SPI 380.79s. TS12 - 29.2FPS (Average - Frameview - Train Stationary)

Starting from the 2600K (SPI: 431s, TS12B1: 33FPS) and predicting the others:

9900K(1) (SPI: 425s, TS12B1: 30.5 FPS)
SPI difference 1.4% faster, 9900K(1) prediction: 33 FPS + 0.014*33 FPS = 33.5 FPS vs Measured: 30.5 FPS

9900K(2) (SPI: 380s, TS12B1: 37.6 FPS)
SPI difference 13.4% faster, 9900K(2) prediction: 33 FPS + 0.134*33 FPS = 37.4 FPS vs Measured: 37.6 FPS

10980XE (SPI: 381s, TS12B1: 29.2 FPS)
SPI difference 13.1% faster, 10980XE prediction: 33 FPS + 0.131*33 FPS = 37.3 FPS vs Measured: 29.2 FPS

3900X (SPI: 501s, TS12B1: 35 FPS)
SPI difference 13.9% slower, 3900X prediction: 33 FPS - 0.139*33 FPS = 28.4 FPS vs Measured: 35 FPS
Notes about these results:
  • Keljian's machine did 3 FPS less in TS12 than predicted based on a SPI estimate from the 4.7G 2600K -9% error
  • Dan's 9900K did 0.2 FPS more in TS12 than predicted based on a SPI estimate from the 4.7G 2600K +0.5% error
  • 10980XE did 8.1 FPS less in TS12 than predicted based on a SPI estimate from the 4.7G 2600K -22% error
  • 3900X did 6.6 FPS more in TS12 than predicted based on a SPI estimate from the 4.7G 2600K +23% error
  • The 9900Ks predicted pretty well. The 3900X was underestimated, but the 4.2G clock rate prevents the part from doing much better in the game.
The 2600K-to-9900K prediction didn't do half bad: it nearly got Dan's result right on the nose, and Keljian's was within 10%. The 10980XE and 3900X diverged by ~22%, but in different directions, with the Ryzen part beating the estimate and the 10980XE falling short.

I wonder if the difference in the 9900Ks is Frameview versus the OSD. Feel free to check the math, it can be tedious.
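
For anyone who wants to check it without the tedium, here is a small script that reproduces the predictions above by scaling the 2600K baseline by the ratio of Super Pi 32M times. The numbers are copied from the table above; the measured FPS for the 9900K(1) and 3900X use the midpoints 30.5 and 35.

Code:
    baseline_spi, baseline_fps = 431.0, 33.0    # 2600K @ 4.7G reference point

    results = {
        # name: (SPI 32M seconds, measured TS12 Bench 1 FPS)
        "9900K(1)": (425.0, 30.5),
        "9900K(2)": (380.0, 37.6),
        "10980XE":  (381.0, 29.2),
        "3900X":    (501.0, 35.0),
    }

    for name, (spi, measured) in results.items():
        predicted = baseline_fps * baseline_spi / spi   # lower SPI time -> proportionally higher predicted FPS
        error = 100 * (measured - predicted) / predicted
        print(f"{name}: predicted {predicted:.1f} FPS, measured {measured:.1f} FPS, error {error:+.1f}%")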

I'm also going to grab a 2600K @ 3.8G tomorrow and rerun the 4.7G test again for verification. Likely will also do some back to back multiple SPI runs to see how much variance there is run to run.

With any luck I should have a dynamic bench ready late tomorrow.

-Mike
 