Riddle me this: Better IPC?

As the scene loads, the frame rate will drop. I end up at 32FPS with a video card load of 20% at 1607MHz on my 1070TI. A 1080TI should be able to do this no sweat.

I'm curious what you get with the 9900K and ~35% more video card. Thank you for doing this.

-Mike

For the record doing the things asked for above:
13.5% CPU usage (if it were single-threaded it would be <10%, like SuperPi; 100% / 16 threads = 6.25% per thread)
15-16% GPU usage
30-31 FPS in fraps using the same setup
...
I also ran the game off the NVMe (per sig) - disk usage was negligible


2600K -> 9900K got no improvement $$$$
DDR3 -> DDR4 got no improvement $$$$
MIVE -> Swanky new MB got no improvement $$$$
1070TI -> 1080TI got no improvement, but the 1080TI has a chance to run at higher AA settings.

^^^^^ Pretty much what SPI32 predicted, if I may

My Loop 24 32M SPI at 4.7G 1866 CAS 9 memory: 411.077s. My TS12 Test Result: 32FPS
Your Loop 24 32M SPI at 5.0G and your memory: 424.57s. Your TS12 Test Result: 30-31FPS

As pretty well expected, AVX settings didn't do squat. Perhaps they will matter when the video card is cranked up.

For reference:
9900K
Total time 7m 6.744s. I do note my memory isn't the fastest on the planet, but I need quantity over speed for what I do (3200 CL16 CR2), and memory makes a big difference to Pi.
So the previous two were
6m 51.077s (24)
6m 36.085s (23)
Note I changed settings to a -2 AVX offset from 0 and ramped all cores up to 5.0GHz.

The above does not mean the 9900K "sucks" in any way; it's just that as an upgrade from the 2600K specifically to play TS12, it has very little bang for the buck. Perhaps that will change when we crank the video up; I'm curious about that.

Now let's change things up a bit, shall we...

I went into my BIOS and changed only the multipliers, from 47/47/47/47 to 35/35/35/35. It turns out the MIVE is a bit funny here, and with that setting the core frequency was 3.6G-ish - it looked like the 2600K was running stock. It doesn't really matter because...

32MSPI with 35x multipliers: 537.628s, TS12 Test Result: 25 FPS with 16% video card load.

Recall
32MSPI with 47x multipliers: 411.077s, TS12 Test Result: 32 FPS with 20% video card load.

The % change in 32M SPI scores is 25.9%.

25 FPS + 25 FPS * 0.259 = 25 FPS + 6.475 FPS = 31.475 FPS.

Not a bad prediction, not bad at all. You can do the same test with your rig, or if you have another machine. Just SPI32M them both and compare the same TS12 test.
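
If you want to play along at home, here is the ratio method as a minimal Python sketch (numbers from above; the assumption - which is the whole hypothesis - is that CPU-limited TS12 FPS scales with the inverse of the SPI loop-24 time):

```python
# Ratio method: scale a known CPU-limited FPS by the ratio of SuperPi 32M
# loop-24 times. Numbers are the 2600K results quoted above.

def predict_fps(known_fps: float, known_spi_s: float, target_spi_s: float) -> float:
    """Assumes FPS is proportional to 1/SPI time on the same title."""
    return known_fps * (known_spi_s / target_spi_s)

# 35x multiplier run: 537.628s / 25 FPS; predict the 47x (411.077s) run:
print(round(predict_fps(25.0, 537.628, 411.077), 1))  # ~32.7 vs. 32 measured
```

Same ballpark as the hand math above; the exact percentage depends on which direction you take the ratio.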

Based on what I see on the screen, I think what is happening is that it is trying to continuously, serially decompress textures/objects, and that is locked to some kind of cycle in the game loop which limits the performance. Whether it does this on purpose is another thing; it may be locking the frame rate to ~30 FPS on purpose for game-mechanic reasons, e.g. physics.

It is highly possible that the draw distance or some other feature is limiting it also.

I don't believe that this is actually CPU limited - at least no more than I believe UT3 was at release, where it had a leak that pegged as many threads as it could at 100%; most of that was the engine waiting on things to happen, and it was fixed in later versions.

Based on my many years of game/programming knowledge, I truly believe the issue here is bad coding and/or lack of optimisation due to locking code - not lack of cpu power.

If, in the extremely unlikely event, I am incorrect, it is possible that a Ryzen chip at a decent clock (say 4.4-4.6GHz single thread) would beat out the Intel equivalent here, given that they do have more execution pipelines/resources per core and therefore potentially fewer opportunities to stall. This does depend on what the compiler did when it compiled the program, though.

I've been working as an electrical design engineer professionally since the late 1980s - so I'm not new at this either. From a practical point of view I would suggest backing off a bit on the detail thinking and being more general: how does the game engine react when I change out CPUs, alter clock rates, change out GPUs? The underlying cause does not matter, as I cannot fix it and Auran never would either. Look at it completely from the customer's point of view. Pretend you want to play TS12 the [H]ard way.

I just happened to notice years ago that this little Pi benchmark that was readily available in reviews tracked differences in CPU-limited performance of TS12. Nobody who knows what they are doing puts Trainz results of any kind in CPU reviews; it's just too obscure. This stand-in benchmark accurately predicted how things would improve going from a 3.8G Prescott to a 4.25G C2D, and from that 4.25G C2D to the 4.7-5.0G Sandy. The primary driver for the need for increased performance was not the game engine itself but the content, which is what makes the platform viable. User content in Trainz is second to none for the genre.

The Ryzen question is interesting, as all the SPI32 predicting I've been doing has been Intel to Intel to Intel. One would have to try it and find out.

I've got to get to work. I hope you are game to try upping the video settings tonight. Thanks for collecting the data.

-Mike
 
You brought up storage as the reason not to look at the actual number Super Pi gives you as your result, and instead to view loop 24. That's the only reason I brought it up. I'm not even sure that's accurate, as I ran the test from a mechanical hard drive in my case.



You mean: How do I know Super Pi doesn't translate to game performance? Because it doesn't. There is a big difference between what's done in a game engine and calculating Pi over and over again. Also, experience tells me that modern processors are faster than older ones, even if Super Pi doesn't really showcase this. There are plenty of other benchmarks and applications that do.

For example, cache sizes and cache design greatly impact game performance. This is literally how AMD raised the performance of its Ryzen 3000 series to be much more in line with Intel's gaming performance than its 2000 series was. AMD even calls the increased L3 cache "GameCache."



Changes in that benchmark and changes in Super Pi are coincidental. I'd be willing to bet they do not correlate into anything useful. Meaning, you can't use Super Pi as an indicator of how many FPS you're going to get under xyz circumstances. It's the same for something far more sophisticated like 3D Mark, which actually is designed for that purpose. The variables that impact it do not necessarily translate to games. You can't say "I get 8056 3D Marks, so I can get 120FPS in CoD:MW 2019." It simply doesn't work that way. Again, calculating Pi isn't the same as running a game engine. That goes for any engine.



For Super Pi? Yeah, I'd think so. Clock speed is going to matter here more than anything, and the architectural changes made since Sandy Bridge are probably not going to have much of an impact here. Calculating Pi is a pretty simple task. It's hard on the CPU in a sense, but it doesn't utilize that much of it. Again, this is why you can't look at Super Pi results as an indicator of game performance. You are comparing apples and 1973 Mustang IIs. There is nothing meaningful in the comparison.



I brought up the other things because those other things you aren't interested in can and do directly impact game performance. Again, Super Pi doesn't make sense as an indicator of game performance. You are putting way too much stock in it. Again, you're misinformed in thinking that Super Pi in any way, shape, or form has any bearing on how different CPUs behave in games. My Super Pi times are similar on my 10980XE and the 9900K. However, the latter is far better at actually playing games.



What I said wasn't an "Intel sales pitch." First off, I wouldn't recommend an Intel processor in most cases right now. Secondly, I am trying to make the point that Super Pi doesn't mean jack shit in the realm of gaming. How do I know this? It's simple. I've literally been reviewing and working with this hardware for more than two decades. I've had all of these generations of CPUs on my test bench, and I can tell you that the benchmarks showcase the differences and how far we've come. Sure, it's not the same as taking a Pentium II at 233MHz and comparing it to a Pentium III at 1GHz, but there have been major advancements since Sandy Bridge. Many of those advances will impact your gaming experience.

It seems I'm the one who hit the raw nerve.



Tuning every game is pretty much the same. You find out what settings have the most impact on performance and decide what trade-offs to make regarding performance vs. visual fidelity. As for the hardware side, it's not that complicated: you get your CPU, RAM, and GPU running as fast as possible while remaining stable. The only part of the equation that's sometimes difficult to manage is the game settings themselves. Certain shadow options or other features may impact one engine more than another in terms of performance but, due to implementation, may or may not impact visuals very much.



The data is what it is. I'm curious to see what his findings are. Either way, I'm still fairly certain that Super Pi can't be used as a meaningful benchmark for determining game performance. This is where you've seriously gone wrong here. Even if the 9900K isn't much faster with this game than your 2600K, it still wouldn't prove that Super Pi is a good metric to go by. Again, the Super Pi benchmark results are virtually the same between my 10980XE and the 9900K. Yet, for gaming, the latter is considerably better in most cases. While average frame rates may report the same, once you get into the lows, maximums, and frame times, you'll see that the 10980XE is, in some cases, vastly inferior to the 9900K. In other words, if all I get out of you two is an average frame rate for each system, I won't be convinced, because that by itself is virtually meaningless.

I actually test performance for a living, and I don't limit my scope to a single ancient game or ancient hardware. I don't judge hardware's viability for gaming by a benchmark that has nothing to do with gaming and wasn't designed as a metric for gaming performance in the first place.

Dan, you have an opportunity here to be schooled and learn something new, or join the population described by "some people you just can't reach".

The thread topic is about a single old-ass game on new processors, if you don't want to contribute to that, stop crapping on this thread. And for the love of God, stop it with the "SPI is not a good general gaming benchmark" -- YOU are the only one talking about that "claim" as I certainly never made it. You do understand there is a difference between "general" and "specific", don't you? Words mean things.

-Mike
 
Well, it looks like the data is with TXE36.

But now I'm dying to know what a Ryzen could do.
 
Dan, you have an opportunity here to be schooled and learn something new, or join the population described by "some people you just can't reach".

I did indeed learn something. The game in question is an odd data point for sure. It's a definite outlier and the result was interesting. I predicted results and was proven wrong.

The thread topic is about a single old-ass game on new processors, if you don't want to contribute to that, stop crapping on this thread.

While I learned that I was wrong about an old-ass game (it's a badly written train wreck, apparently), you need to learn to be less condescending or you won't last long around here.

And for the love of God, stop it with the "SPI is not a good general gaming benchmark" -- YOU are the only one talking about that "claim" as I certainly never made it. You do understand there is a difference between "general" and "specific", don't you? Words mean things.

You are literally the one who brought up Super Pi as a benchmark, as a reference to prove that CPUs haven't improved over the years in terms of single-threaded performance. You used it as a basis to determine that a CPU upgrade isn't worthwhile for a game. That's you using Super Pi as a predictor of gaming performance, and it isn't one. You brought up Super Pi and made the connection to gaming. It's right below:

So, on multicore stuff, definite improvement in performance and cost since the lowly 2600K circa 2011. However, on single core, there really doesn't seem to be much improvement at all. I've got a 2600K on an Asus Maximus IV Extreme with 16GB of Samsung low latency DDR-1600 memory that can do 2133. I looked around for some SuperPI32 benchmarks on the web and benchmarked my machine as well as my Dell 8700 at work:

This is what you wrote. You brought Super Pi into this as a benchmark and drew conclusions from it about gaming performance.

Well, it looks like the data is with TXE36.

But now I'm dying to know what a Ryzen could do.

Yes, I was wrong concerning my predictions about how the game would perform on modern hardware. I too am curious as to how Ryzen would perform here. It has slightly better IPC than Intel does right now and a very different architecture.
 
I did indeed learn something. The game in question is an odd data point for sure. It's a definite outlier and the result was interesting. I predicted results and was proven wrong.



While I learned that I was wrong about an old-ass game (it's a badly written train wreck, apparently), you need to learn to be less condescending or you won't last long around here.



You are literally the one who brought up Super Pi as a benchmark, as a reference to prove that CPUs haven't improved over the years in terms of single-threaded performance. You used it as a basis to determine that a CPU upgrade isn't worthwhile for a game. That's you using Super Pi as a predictor of gaming performance, and it isn't one. You brought up Super Pi and made the connection to gaming. It's right below:



This is what you wrote. You brought Super Pi into this as a benchmark and drew conclusions from it about gaming performance.



Yes, I was wrong concerning my predictions about how the game would perform on modern hardware. I too am curious as to how Ryzen would perform here. It has slightly better IPC than Intel does right now and a very different architecture.

Takes a big man to eat crow.

Kudos.

Is this game free? I could try it on a 3600 and 5700xt
 
Having a hypothesis and testing it, then interpreting results is what scientists do. That's actually the normal flow, even if sometimes you are surprised or incorrect with the original hypothesis.

Rookie mistake to go to the mat on your expected experimental outcome before the experiment is complete. Waiting 24 hours here would have saved a lot of face.

It is surprising to me as well that an application found a way to scale so utterly poorly. I think I know where to send a resume for some contract work... ;)

I really don't think you want to do that. Auran which became NV3Games is fundamentally dysfunctional and likely does not want to be fixed. TRS19 is all about renting Trainz and DLC today.

-Mike
 
Rookie mistake to go to the mat on your expected experimental outcome before the experiment is complete. Waiting 24 hours here would have saved a lot of face.

I don't understand the confrontational tone.

There's no "mat" he's going to and no face to save. He made a prediction based upon his significant experience. Nobody is 100%. I agree with his point - this is an outlier.
 
I did indeed learn something. The game in question is an odd data point for sure. It's a definite outlier and the result was interesting. I predicted results and was proven wrong.



While I learned that I was wrong about an old-ass game (it's a badly written train wreck, apparently), you need to learn to be less condescending or you won't last long around here.



You are literally the one who brought up Super Pi as a benchmark, as a reference to prove that CPUs haven't improved over the years in terms of single-threaded performance. You used it as a basis to determine that a CPU upgrade isn't worthwhile for a game. That's you using Super Pi as a predictor of gaming performance, and it isn't one. You brought up Super Pi and made the connection to gaming. It's right below:

So, on multicore stuff, definite improvement in performance and cost since the lowly 2600K circa 2011. However, on single core, there really doesn't seem to be much improvement at all. I've got a 2600K on an Asus Maximus IV Extreme with 16GB of Samsung low latency DDR-1600 memory that can do 2133. I looked around for some SuperPI32 benchmarks on the web and benchmarked my machine as well as my Dell 8700 at work:

This is what you wrote. You brought Super Pi into this as a benchmark and drew conclusions from it about gaming performance.

Trainz has always been cringe-worthy, but it is a niche market without much choice. Everyone has those games they love to hate.

Condescending? I thought this was the forums of [H]ardOCP. Toughen up, buttercup, and be more careful about arguing from a base of ignorance (your unfamiliarity with TS12) - I actually gave you several opportunities to save face.

Also, please don't quote me out of context, this is the context of that statement right in the OP:

There has been a lot of hype lately surrounding the new Ryzens "beating Intel", but after reading several reviews of both AMD and Intel offerings I can't help thinking "all sizzle and no steak". This is with a few personal caveats:

1) I generally don't play modern games, and the only multicore one that I do play won't max out a 4c/8t CPU.
2) Games I do play tend to be simulations and are single thread heavy.
- in my experience, the 32M SuperPI bench below tracks differences quite nicely.
3) Generally don't do multicore stuff.

So, on multicore stuff, definite improvement in performance and cost since the lowly 2600K circa 2011. However, on single core, there really doesn't seem to be much improvement at all. I've got a 2600K on an Asus Maximus IV Extreme with 16GB of Samsung low latency DDR-1600 memory that can do 2133. I looked around for some SuperPI32 benchmarks on the web and benchmarked my machine as well as my Dell 8700 at work:

Single-thread-heavy has always been a part of this thread, and several of the early commenters recognized this.

Yes, I was wrong concerning my predictions about how the game would perform on modern hardware. I too am curious as to how Ryzen would perform here. It has slightly better IPC than Intel does right now and a very different architecture.

Thank you, and I'm curious about Ryzen too, although I don't expect much: if it were a big deal, I'd expect AMD and AMD fanbois to be shouting it from the rooftops. Haven't heard much, but admittedly I have not researched it all that thoroughly.

These new processors are a new normal compared to historical trends. Seems like two steps forward, one step back. Yes, the new offerings from Intel and AMD can be much faster, but that requires new software that takes advantage of threads as well as new instructions. A hallmark of the PC has always been backwards compatibility. This kind of breaks that, in that most of the old stuff will still run, but it potentially is not going to enjoy nearly as significant a performance boost.

BTW, two other games that I believe behave like TS12 are FSX and Grand Prix Legends. I haven't benched either in quite some time, but that is what I recall. I may have run the same experiment on FSX as on TS12. GPL had a fast enough CPU when the 3.8G overclocked Prescott came along, IIRC.

Can you suggest any games/apps that are single threaded that are faster on the new processors than Sandy Bridge at equivalent clock rates with equivalent video cards/settings?

The personally sad thing for me is if Keljian's numbers had been great, I'd be at Microcenter right now.

-Mike
 
I don't understand the confrontational tone.

There's no "mat" he's going to and no face to save. He made a prediction based upon his significant experience. Nobody is 100%. I agree with his point - this is an outlier.

Never questioned his significant experience, beyond noting that he was unfamiliar with this title. Absolutely, positively asserting that there is no way SuperPI can possibly predict a game outcome pretty much looked like the mat to me, especially with an experiment in the works that would have proven the point one way or another. A little patience could have gone a long way here.

Real science doesn't care about feelings, and assumptions about things you don't know can bite.

-Mike
 
I’m going to reiterate a few things:
1. The game is multithreaded (if minimally) and it has different loops; it most likely is not using pi much at all - thus you cannot use SuperPi results as an analog for it. I do not care what you are saying; I refuse to believe it.

2. Crappy game engines are crappy; you cannot blame hardware performance based on an engine that limits itself. It's like saying a 90-year-old can drive an F1 car like a 20-something F1 driver; they cannot.

The game mentions 3DNow! for crying out loud... (which, incidentally, I believe Ryzen may unofficially support)

3. I am happy to run whatever tests you want, but I will do them when/as I have time, in the name of science. In the future, please put the test requirements in a post by themselves, so it is easier to jump into this thread, grab them, and run them.

4. Peg back the confrontational tone, the posturing, and the references to SuperPi; all are unnecessary in this thread. You believe something, we believe something else, and it looks like neither of us is going to change our opinions, so there is no point arguing.
 
I’m going to reiterate a few things:
1. The game is multithreaded (if minimally) and it has different loops; it most likely is not using pi much at all - thus you cannot use SuperPi results as an analog for it. I do not care what you are saying; I refuse to believe it.

2. Crappy game engines are crappy; you cannot blame hardware performance based on an engine that limits itself. It's like saying a 90-year-old can drive an F1 car like a 20-something F1 driver; they cannot.

The game mentions 3DNow! for crying out loud... (which, incidentally, I believe Ryzen may unofficially support)

3. I am happy to run whatever tests you want, but I will do them when/as I have time, in the name of science. In the future, please put the test requirements in a post by themselves, so it is easier to jump into this thread, grab them, and run them.

4. Peg back the confrontational tone, the posturing, and the references to SuperPi; all are unnecessary in this thread.

I could give it a go on a 3900X / 1080 later tonight. Looks like it's still $2.50 on Amazon; I can throw the code into Steam.

I think it's very possible that performance in this one benchmark could correlate to performance in TS12, but I also think the differing amounts of CPU utilization might point to one or more underlying limitations that may or may not be common to both pieces of software.

There's clearly much more that goes on inside a CPU than I personally understand, but I would rely on a wide variety of CPU benchmarks before drawing conclusions about generalized IPC improvement.

Even if, say, the 3900X doubled the frame rate, to me it proves not much of anything in the whole IPC debate one way or the other.
 
3900X with PBO on, FCLK at 1900
32GB 3800 C14
GTX 1080

SuperPI seems to hop cores between loops, but the average clock speed is about 4.2GHz.

SuperPI 32M Loop 24 time: 8m 21s (501s)
CPU usage 4%, one thread

Following the instructions from earlier: 34 FPS, occasionally 33 per Geforce overlay
CPU usage 5.5-6%, most of one thread, parts of a couple of others.

IMO:
Seems like the most inconsequential test ever. Reminds me of maxing out chunks in Minecraft or something. The game engine isn't made to pull it off efficiently.

*edit* Set SuperPI to the affinity of my fastest core via Ryzen Master and reran: 8m 29s. Was steady at 4.3GHz.
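
(For anyone who wants to script that affinity pin instead of using Ryzen Master, here's a hedged sketch with Python's psutil; the executable name and core index are assumptions - check yours in Task Manager:)

```python
# Hypothetical sketch: pin a running SuperPi instance to one logical CPU so it
# can't hop cores mid-run. Executable name and core index are assumptions.
import psutil

TARGET = "super_pi_mod.exe"  # assumed process name - verify in Task Manager
BEST_CORE = 2                # logical CPU index of your fastest core

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == TARGET:
        proc.cpu_affinity([BEST_CORE])  # restrict scheduling to that one CPU
        print(f"Pinned PID {proc.pid} to logical CPU {BEST_CORE}")
```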
 
3900X with PBO on, FCLK at 1900
32GB 3800 C14
GTX 1080

SuperPI seems to hop cores between loops, but the average clock speed is about 4.2GHz.

SuperPI 32M Loop 24 time: 8m 21s (501s)
CPU usage 4%, one thread

Following the instructions from earlier: 34 FPS, occasionally 33 per Geforce overlay
CPU usage 5.5-6%, most of one thread, parts of a couple of others.

IMO:
Seems like the most inconsequential test ever. Reminds me of maxing out chunks in Minecraft or something. The game engine isn't made to pull it off efficiently.

No, that's actually a pretty impressive result on AMD's part. It's something Intel has not done since at least the P4 days. SuperPI has tracked Trainz performance since TRS06 running on a P4 Northwood. AMD looks to have some secret sauce in there that Intel doesn't have, even in the 9900K.

Just to be sure: Draw Distance set to 4000m and all in-game graphics sliders through anisotropy to the right? Hate to ask, but a reduced draw distance will significantly increase frame rates.

Really appreciate you doing this.

I don't think it is inconsequential that AMD is beating Intel single threaded here at a 19% lower clock rate.

The whole reason I asked the original question was to see whether Ryzen would break this relationship (known to me) between SuperPI, TS12, and Intel x86. It wouldn't surprise me in the least if this behavior extends to other, older, single-threaded games such as FSX. Have to test to know for sure.

-Mike
 
No, that's actually a pretty impressive result on AMD's part. It's something Intel has not done since at least the P4 days. SuperPI has tracked Trainz performance since TRS06 running on a P4 Northwood. AMD looks to have some secret sauce in there that Intel doesn't have, even in the 9900K.

Just to be sure: Draw Distance set to 4000m and all in-game graphics sliders through anisotropy to the right? Hate to ask, but a reduced draw distance will significantly increase frame rates.

Really appreciate you doing this.

I don't think it is inconsequential that AMD is beating Intel single threaded here at a 19% lower clock rate.

The whole reason I asked the original question was to see whether Ryzen would break this relationship (known to me) between SuperPI, TS12, and Intel x86. It wouldn't surprise me in the least if this behavior extends to other, older, single-threaded games such as FSX. Have to test to know for sure.

-Mike

Yup I double checked, same result twice.

By inconsequential I mean to say I don't think you can really draw a conclusion either way. AMD did lose in the SuperPI by a nice margin.

Could be the difference in memory spec, it's hard to know.
 
I’m going to reiterate a few things:
1. The game is multithreaded(if minimally), and it has different loops, it most likely is not using pi much of at all - thus you cannot use superpi results as an analog for it, I do not care what you are saying, I refuse to believe it.

Well, there is a big source of confusion!

It has never been claimed that SuperPI scores can predict changes to TS12's CPU-limited framerate because TS12 actually calculates or uses Pi. Pi is a constant, just like any other constant to the processor. About the only time computers calculate Pi is for benchmarking. The only thing really common between these programs is that they are both running some x86 code on Windows, likely without any of the fancy new instructions introduced after 2010 or so.

Yes, TS12 is "multi-threaded", but it is really crappy at it; much like the "multi-threaded" patched FSX, one thread still does the bulk of the work. The earlier versions were single-threaded; TS12 was an incremental change. TANE was a huge change and required a rewrite of the game engine.

2. Crappy game engines are crappy, you cannot blame hardware performance based on an engine that limits itself. It’s like saying a 90 year old can drive an F1 like a 20 something F1 driver, they cannot.

The point isn't to "blame hardware"; the point is that I have an application that I run on some hardware. I get an upgrade itch. Will the new hardware run my application faster/better or not? If it doesn't, and that particular application is driving the upgrade, the wallet stays closed.

Care about a different application? YMMV (Your Mileage May Vary) - see https://hardforum.com/threads/upgraded-from-2500k-to-3700x-mind-blown.1988627/#post-1044407558 for what happens if one cares about Overwatch.

Which games benefit, which ones don't? Very open question, lots of legacy games out there. Without testing, drawing conclusions is a bit dicey.

3. I am happy to run whatever tests you want, but I will do them when/as I have time, in the name of science. In the future, please put the test requirements in a post by themselves, so it is easier to jump into this thread, grab them, and run them.

Probably, but it will likely be next week with my schedule before I can create dynamic tests. Static high video may be available in a day or so, but no promises.

4. Peg back the confrontational tone, the posturing, and the references to SuperPi; all are unnecessary in this thread. You believe something, we believe something else, and it looks like neither of us is going to change our opinions, so there is no point arguing.

Civility is a two way street. I certainly didn't feel any love coming from Dan. I'm getting too old for internet peeing races.

You do realize "superpi" was in my opening post that started the thread? The term is perfectly relevant here and we can agree or disagree about its importance. I want to see the data, not politics.

-Mike
 
Yup I double checked, same result twice.

By inconsequential I mean to say I don't think you can really draw a conclusion either way. AMD did lose in SuperPI by a nice margin.

Could be the difference in memory spec, it's hard to know.

Very cool. I think what may be going on here is that the ratio of raw SPI score to TS12 CPU-limited frame rate has changed. The way to tell would be to run another 3rd-gen Ryzen at a different clock rate, or cache size, or something else that could cause a 20% or so change in SPI score. Then run the TS12 test on that system and see if the frame rate tracks. I'm pretty sure it would, but I wouldn't say for sure without testing.

Can you easily knock the clock rate of that 3900X down by 20-25% and retest?

On the Intel side of things, my observation is that memory-induced changes to SPI results track into TS12 results just like CPU clock. Spending more on DDR2-1000 when I upgraded to Core2Duo was driven by SPI testing and wanting the best TS12 results possible. At the time, DDR2-1000 beat out DDR3 because DDR3 was new and the latency was too high for the clock rates available. For a while you could pick between DDR2 or DDR3 motherboards. And yes, differences in performance from memory were pretty small, amounting to about a "bump" to the next CPU multiplier.

-Mike
 
Declocked to 3.6, everything else the same.

Looked like it was holding steady at 30. I moved the mouse and it jumped to 32. I determined that when the mouse is over menus I get 32, over anything else 30.

I reverted back to the old settings and ran the test again. Similar behavior: 36 with the cursor over menus, 34 over anything else.

I believe the core argument here is: does SuperPI performance = TS12 performance? Based on my results, no.

SuperPI, I'm certain, uses a relatively simple algorithm to calculate digits of Pi; TS12, I'm sure, is much different, as it's an entire game. I don't think it's fair to lump them together because they are old or appear to be single-threaded (TS12 seems not to be, to some degree) - even without my results.

I believe that is what Keljian is getting at.

That's why I use PassMark, Heaven, or Fire Strike benchmarks to determine game performance. The subcategories of those benchmarks that have CPU tests are more comprehensive and proven than SuperPi or a literal handful of frames in an aging game. Low single-digit frame gains are within the margin of error for most reviewers for a reason.
 
This thread sparked a bit of curiosity in me about how far we've come overall from the 2600K in actual performance (not limited to Super Pi...). It would be cool if the data went back a full 10 years to the i7 920 days, but the OP mentioned the 2600K specifically. I just say that because I recall the 2600K destroying first-gen i7s.

This video from GN last year was quite an interesting watch:

I'm sure they've since added the new ryzen CPUs into their giant repository of benchmark data, but I'm not quite sure how to find it. Would be nice to have them in these same charts.
 
This thread sparked a bit of curiosity in me about how far we've come overall from the 2600K in actual performance (not limited to Super Pi...). It would be cool if the data went back a full 10 years to the i7 920 days, but the OP mentioned the 2600K specifically. I just say that because I recall the 2600K destroying first-gen i7s.

This video from GN last year was quite an interesting watch:

I'm sure they've since added the new ryzen CPUs into their giant repository of benchmark data, but I'm not quite sure how to find it. Would be nice to have them in these same charts.


Yep, his is even faster at 5.0. If I had to guess, I'd put it roughly right in there with a 2nd-gen Ryzen at 4.2 in games.

Many of the Ryzen 3rd-gen reviews have the same Ryzen 2nd gen at 4.2 on the charts, so you can get a good idea of where good old Sandy is still hanging out.
 
The reason Intel stuff tracks with memory performance is that textures are shunted to and from memory, and things are decompressed/arranged.

The AMD results suggest that, with more per-core execution units, there is less memory load in the loop as things are shuffled out to cache/memory.

The reason an old game engine would do this is I/O: disk access on spinning disks is slow, while access from memory is not, so it makes sense to load memory up with compressed textures and models, then rapidly decompress them on the fly.

We have come a long way in I/O since the early versions of this software, so there is less need to do what it is trying to do; moreover, there are now more cores to shunt things to for decompression and other work.
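
Purely to illustrate the pattern I mean - not actual TS12 code; zlib and the asset name are stand-ins:

```python
# Illustrative only: keep assets compressed in RAM, decompress on demand.
# Cheap versus hitting a spinning disk, but the decompression cost lands on
# the CPU every fetch - and serializes if the engine does it on one thread.
import zlib

ram_cache = {"loco_diffuse.tga": zlib.compress(b"\x00" * 1_000_000)}

def fetch_asset(name: str) -> bytes:
    return zlib.decompress(ram_cache[name])  # CPU pays here, not the disk

print(len(fetch_asset("loco_diffuse.tga")))  # 1000000
```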

Note the AMD results were on RAM at 3800 C14; mine (Intel) was at 3200 C16 (I have not tweaked my RAM).

In this particular use case I speculate that memory bandwidth, latency, and speed all count. This would seem to point to a Ryzen 3950X with PBO and four sticks of fast, low-latency memory as the optimum for this particular software.

Estimated results with that setup? Rough math gives 10% faster RAM (possibly more if you can get it) and 10% faster single thread; call it 15% overall. 15% on 35-and-change = around 40-42 FPS.
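
As straight arithmetic (my assumptions above, nothing more):

```python
# Speculative 3950X + fast RAM estimate: ~10% from memory, ~10% from single
# thread, called ~15% combined. Baseline is the 3900X result from above.
baseline_fps = 35.0
combined_uplift = 0.15
print(round(baseline_fps * (1 + combined_uplift)))  # ~40, i.e. around 40-42 FPS
```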
 
Declocked to 3.6, everything else the same.

Looked like it was holding steady at 30. I moved the mouse and it jumped to 32. I determined that when the mouse is over menus I get 32, over anything else 30.

I reverted back to the old settings and ran the test again. Similar behavior: 36 with the cursor over menus, 34 over anything else.

That is expected behavior and thanks for collecting this data.

I believe the core argument here is: does SuperPI performance = TS12 performance? Based on my results, no.

No, actually, that is not the argument, and this is a key point: the core argument is about ratios. Let's simplify the situation to just changing clock rates on one system. If I run SPI at 3.6G and at 4.2G, the ratio of the performance change in SPI will track the performance change in TS12 between 3.6G and 4.2G. We need one more number from your system: what is its SPI score at 3.6G?

This is the data from your system from your previous posts:

3.6G SPI32 = ??? TS12 = 30-32 FPS
4.2G SPI32 = 501s TS12 = 33-34 FPS

We need that 3.6G SPI32 score to see if the estimate tracks as well as it did for Intel. I'm thinking what the switch of architectures did is change the absolute relationship between SPI and TS12 scores. Keep in mind that this method of estimating has never relied on the absolute relationship, but on the ratio of the change in the scores.

What I showed with data is that given an Intel system SPI score, I can predict the FPS of CPU limited TS12 on that system.

What your data showed is that given a Ryzen system SPI score, I cannot predict the FPS of a CPU limited TS12 on that system...yet. The "yet" part is seeing what is possible with that 3.6G SPI32 score from above.

While we are waiting for that score, I'm going to take a stab at estimating based on what we know now. Going from 3.6 to 4.7G on my 2600K yielded an improvement in SPI scores of 25.9%. The clock rate change going from 3.6 to 4.7G is 30.6%. Note that not getting all of the 30.6% in the SPI scores makes perfect sense, as the only change is in the CPU clock, so the other stuff like memory is running at the same speed. 25.9%/30.6% means I got 0.846 of the improvement in SPI scores.

Your clock change is 16.7%. If I assume your change in SPI score is 0.846 of that, or 14.1%, then I estimate the 4.2G TS12 score as 31FPS + 0.141*31FPS = 35.4 FPS, which is in line with your 33-34 FPS result reported above.

I think the relationship will hold when you get that SPI score. I'm guessing that score will be about 572.

The data will tell in the end.
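
Here's that whole estimate as a few lines of Python, with the numbers straight from the posts above. The 0.846 factor is empirical from my 2600K and may not carry over to Zen 2 - that's exactly what the pending 3.6G SPI run will tell us:

```python
# Two-step estimate: clock change -> SPI change (scaled by the 0.846 factor
# observed on the 2600K) -> CPU-limited TS12 FPS change.
SPI_PER_CLOCK = 0.259 / 0.306          # ~0.846 from the 2600K data above

clock_gain = (4.2 - 3.6) / 3.6         # 3900X declock test: ~16.7%
spi_gain = clock_gain * SPI_PER_CLOCK  # ~14.1%

fps_at_36 = 31.0                       # measured FPS at 3.6G
print(round(fps_at_36 * (1 + spi_gain), 1))  # ~35.4 (measured 33-34 at 4.2G)

loop24_at_42 = 501.0                   # measured SPI loop-24 at 4.2G
print(round(loop24_at_42 * (1 + spi_gain)))  # ~572s predicted at 3.6G
```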

SuperPI, I'm certain, uses a relatively simple algorithm to calculate digits of Pi; TS12, I'm sure, is much different, as it's an entire game. I don't think it's fair to lump them together because they are old or appear to be single-threaded (TS12 seems not to be, to some degree) - even without my results.

I believe that is what Keljian is getting at.
I think he is looking from the inside out and I'm looking from the outside in. There is nothing wrong with either point of view, but in my opinion the inside is pretty complex to work with and perhaps too theoretical. My argument is to lump them together because they are both similar-vintage x86 code. Whatever AMD or Intel does with a modern processor, it has to take the unchanged code and spit out correct results, i.e. input bits = output bits. I don't think how the sausage is made is going to matter all that much, because of the code compatibility.

That's why I use PassMark, Heaven, or Fire Strike benchmarks to determine game performance. The subcategories of those benchmarks that have CPU tests are more comprehensive and proven than SuperPi or a literal handful of frames in an aging game. Low single-digit frame gains are within the margin of error for most reviewers for a reason.

There are lies, damned lies, and benchmarks. Generally speaking, the best benchmark is the actual game/application you are going to use. A typical problem is that benchmarks for your game/application aren't available, so you go looking for stand-ins. Many years ago I noticed this little SPI benchmark tracked this goofy trains program I was using when the CPU was maxed out.

As a benchmark, SPI has some very good characteristics. It is reasonably fast. It is very repeatable. It is simple to run. Results are simple. Results also have limited meaning and application. I've only seen it applicable to this game (and I suspect other single-thread-heavy sims like FSX) when the game is CPU limited.

With gaming benchmarks, the whole GPU thing brings in complication.

I have used Heaven in benchmark comparisons, but that's tricky too, because I'm not sure I'm running the exact same settings/driver as the benchmark publisher. I use Heaven in the context of CPU lightly loaded, GPU maxed out. We can add Heaven into this discussion/data collection too if you want -- I'm both game and curious.

-Mike
 
That is expected behavior and thanks for collecting this data.



No, actually, that is not the argument, and this is a key point: the core argument is about ratios. Let's simplify the situation to just changing clock rates on one system. If I run SPI at 3.6G and at 4.2G, the ratio of the performance change in SPI will track the performance change in TS12 between 3.6G and 4.2G. We need one more number from your system: what is its SPI score at 3.6G?

What I showed with data is that given an Intel system SPI score, I can predict the FPS of CPU-limited TS12 on that system.

This is precisely what I was talking about. You are using Super Pi to predict game performance. Whether you apply a ratio to the results or not is irrelevant. You are using it as a game benchmark in a roundabout and somewhat convoluted way. I guess this makes some sense, as almost no one seems to have this game and many aren't going to drop the $3 on it just to satiate someone else's curiosity. Ordinarily I'd tell you this is a bad idea because different processors have different architectures. Most of the time I'd be right, and most of the time you wouldn't want to use a third-party tool to predict game performance, because it simply doesn't work.

I would still argue that benchmarking the actual game is the best way to go about determining its performance, but I understand no one has TS12. I had never even heard of it until this thread.

There is nothing wrong with either point of view, but in my opinion the inside is pretty complex to work with and perhaps too theoretical. My argument is to lump them together because they are both similar-vintage x86 code. Whatever AMD or Intel does with a modern processor, it has to take the unchanged code and spit out correct results, i.e. input bits = output bits. I don't think how the sausage is made is going to matter all that much, because of the code compatibility.

This is a gross oversimplification. Being old x86 code doesn't generally mean that there is anything correlative between them. A game engine, even a CPU-limited one, doesn't calculate Pi. I think the only reason this works at all is because Intel's processors have largely been stagnant in terms of single-threaded performance. Even so, there may be differences between a 3770K and a 10980XE in this game, because it is a game. Yet both of these CPUs will clock about the same: around 4.7GHz. An HEDT processor, however, is generally less than ideal for gaming. There are increases in latency given their designs. The first HEDT processor not to suffer in this way is really the third-generation Threadripper.

Furthermore, using a third-party application as a predictor of game performance will almost always require some sort of math and a lot of data gathered up front to be put to use. I'd still argue there is going to be a big margin of error for this, as the relationships aren't likely to be 1:1. For example, as I said, I bet that if I ran TS12 on the 10980XE, it would run it like crap. Sure, at 4.7GHz it's no slouch, but it's slower than a 9900K at gaming. It's almost worth the $3 for me to find that out. I could be wrong, but it would be an interesting test. Of course, the 10980XE and HEDT processors for a game like this would still be an outlier at best.

You mentioned trying to get a baseline for Ryzen processors on TS12 so you could use Super Pi to predict their results. This will have a higher margin of error than Intel CPUs generally will, due to their configuration and architectural changes being much more meaningful than Intel's have been. For example, Zen and Zen+ CPUs may correlate well enough together, but Zen 2 is a different beast. While Zen 2 is a descendant of Zen and Zen+, a lot has changed between them: cache design, the Infinity Fabric, and so on. However, as I found out the hard way, Threadripper CPUs (1st and 2nd generation) are kind of bad at gaming. While their Super Pi scores will be similar to those of their mainstream counterparts, they suck at gaming due to their NUMA architecture and the amount of latency introduced when crossing CCX complexes inside the CPU.

A third generation Threadripper would be a different beast as well.

There are lies, damned lies, and benchmarks. Generally speaking, the best benchmark is the actual game/application you are going to use. A typical problem is that benchmarks for your game/application aren't available, so you go looking for stand-ins. Many years ago I noticed this little SPI benchmark tracked this goofy trains program I was using when the CPU was maxed out.

This is something we can absolutely agree on. Taking this concept further, any third party benchmark, such as 3D Mark or Heaven aren't actually games themselves. Nor are they accurate predictors of what you can expect in any game beyond the vaguest connection. Basically, if you get a really high 3D Mark score, chances are you can run any given game well. But as I said before, you can't say 11,000 3D Marks equals 120FPS in Destiny 2. The correlation between the two things just isn't there. Such benchmarks are often better as stress testers than they are anything else.

As a benchmark, SPI has some very good characteristics. It is reasonably fast. It is very repeatable. It is simple to run. Results are simple. Results also have limited meaning and application. I've only seen it applicable to this game (and I suspect other single-thread-heavy sims like FSX) when the game is CPU limited.

With gaming benchmarks, the whole GPU thing brings in complication.

I have used Heaven in benchmark comparisons, but that's tricky too, because I'm not sure I'm running the exact same settings/driver as the benchmark publisher. I use Heaven in the context of CPU lightly loaded, GPU maxed out. We can add Heaven into this discussion/data collection too if you want -- I'm both game and curious.

-Mike

A few things here. Yes, Super Pi is all that you said it is, and it's generally meaningless as it relates to games. This has been my point from the beginning. The fact that you can derive anything from it in the realm of any game, CPU limited or not, is impressive. I still think it's a poor predictor even for TS12, due to my examples above. Again, I get why you're doing it. No one has TS12.

The other thing I'd like to mention is the original Crysis. It's actually somewhat similar. Although lightly multi-threaded, it's no better on a 9900K than it is on a 2600K or even older processors. When Crysis came out, almost all CPUs were single core. Dual-core CPUs were relatively new then. The Crytek engines have since been improved dramatically, but Crysis 1 can still bring a system to its knees today if you crank up the settings and run it at 4K.

You mentioned bringing GPUs into the equation and complicating things. While it's true that the GPU is more important than the CPU as it relates to gaming, the CPU is still critical. I learned this the hard way when I tried to run Destiny 2 at 4K on a Threadripper 2920X. It was terrible, because my minimum frame rates and frame times were outright garbage. A 9900K, on the other hand, was considerably better. The difference in minimums was 26FPS (2920X) vs. 56FPS (9900K). The TR chip had much higher maximum FPS and much lower minimums. It's much the same with a 10980XE, with behavior that's not unlike the older Threadripper's. The point being, the GPU does perhaps complicate things, but your CPU will still have a massive impact on your frame rates and, more importantly, your frame times.

The "wisdom" that your GPU is far more important than your CPU is still true, but it doesn't tell the whole story. Effectively, for most people gaming at 1080P with a decent graphics card, it won't make much of a difference. That is, 170FPS is as good as 200FPS for practical purposes. However, its when you turn down settings and take the GPU out of the equation where the difference shows. On the opposite end of the spectrum, at 4K, your CPU is more important than you'd think as low minimums are something you'll both see and feel quite often.

You can bring Heaven into the equation all you want, but I'm not sure it will help you in any way when it comes to TS12. However, if you think it will, I have LOTS of Heaven results from a lot of processors.
 
Hold on, correct me if I'm wrong:

^^^^^ Pretty much what SPI32 predicted, if I may
My Loop 24 32M SPI at 4.7G 1866 CAS 9 memory: 411.077s. My TS12 Test Result: 32FPS
Your Loop 24 32M SPI at 5.0G and your memory: 424.57s. Your TS12 Test Result: 30-31FPS

.....

No, actually, that is not the argument, and this is a key point: the core argument is about ratios. Let's simplify the situation to just changing clock rates on one system. If I run SPI at 3.6G and at 4.2G, the ratio of the performance change in SPI will track the performance change in TS12 between 3.6G and 4.2G. We need one more number from your system: what is its SPI score at 3.6G?

-Mike


Based on the original argument, you reasoned that AMD/9th-gen Intel would not perform better in TS12 because their SuperPi scores are not better than Sandy Bridge's.

Our differing SuperPi scores: SB ~400s vs AMD ~500s.
If I hadn't tested the game, you would have reasonably assumed that AMD would perform at 4/5ths the performance of your SB (~25fps) - that is the same deduction you made on the tested 9900K. The 9900K had slightly worse SuperPi, slightly worse TS12.

"^^^^^ Pretty much what SPI32 predicted, if I may
My Loop 24 32M SPI at 4.7G 1866 CAS 9 memory: 411.077s. My TS12 Test Result: 32FPS
Your Loop 24 32M SPI at 5.0G and your memory: 424.57s. Your TS12 Test Result: 30-31FPS"

Instead, AMD performed better in TS12, meaning there is not a direct relationship between SuperPI score and TS12.

AMD had a worse SPI than SB and yet had a higher TS12 than SB. You therefore cannot use SPI to deduce TS12 score.

Of course declocking will worsen SuperPI within the same CPU. The question was, does worse SuperPI between two different CPUs translate similarly to TS12.

Thus the only confirmed benchmark for TS12 is TS12 itself. Maybe that used to not be the case.

I certainly wouldn't deduce any IPC generalizations either.
 
The question was, does worse SuperPI between two different CPUs translate similarly to TS12.

It does, since he predicted the results from different Intel CPUs. The prediction for the AMD was incorrect though, which means that SuperPi is no longer (or was never) valid.
 
It does, since he predicted the results from different Intel CPUs. The prediction for the AMD was incorrect though, which means that SuperPi is no longer (or was never) valid.

Yes correct (though with the cursor placement issue I saw, we may want re-verify)

I believe this is ultimately the conclusion here.
 
It does, since he predicted the results from different Intel CPUs. The prediction for the AMD was incorrect though, which means that SuperPi is no longer (or was never) valid.

Yes correct (though with the cursor placement issue I saw, we may want re-verify)

I believe this is ultimately the conclusion here.

And that's been my argument. Because of many other variables, you cannot use Super Pi to predict game performance. While it may work for mainstream Intel CPUs between Sandy Bridge and Coffee Lake, that goes out the window when you add HEDT Intel CPUs or AMD CPUs of any kind into the mix.

Simply put, I think he's trying to make his theory fit the evidence rather than the other way around. His initial conclusion didn't sound right to me, but the data did seem to back it up, even if a convoluted method had to be devised to make it work. However, Ryzen and other processors present a problem, in that they don't achieve the same Super Pi results as Intel CPUs do. Even if you devise some sort of math to help you determine a means of predicting TS12 performance with AMD CPUs, without verification it's useless. Even if you do so for the mainstream, it's going to differ between different Ryzen generations. Then there are HEDT parts. As I said, HEDT processors in general aren't as good for games for a variety of reasons. Clock speeds aside, you have a NUMA-based design, the Mesh bus, cross-CCX latency issues, and so on.
 
Then there are HEDT parts. As I said, HEDT processors in general aren't as good for games for a variety of reasons. Clock speeds aside, you have a NUMA-based design, the Mesh bus, cross-CCX latency issues, and so on.

Dan_D, I speculate HEDT will perform better in TS12, due to a greater number of memory channels, if all other factors are the same.

That said, with Mesh/CCX etc., I think Intel may win out on HEDT.


Regardless, if one was looking for the "ultimate performance for money in this particular use case" I would lean very heavily to the upper range of AMD's Desktop offerings and some screaming fast memory.
 
Simply put, I think he's trying to make his theory fit the evidence rather than the other way around. His initial conclusion didn't sound right to me, but the data did seem to back it up even if a convoluted method had to be devised to make it work.

Making the theory fit the evidence is a lot better than making the "evidence" fit the theory. I put the second "evidence" in quotes because, technically, that technique is making crap up. Making theory fit the evidence is science - it's actually the whole point of science.

Ignore or disbelieve the data at your peril. If you really don't believe the data, pick at the data itself and invalidate it if possible. If you can't invalidate the data, perhaps rethink what may be going on. The data I've presented here is very simple, very repeatable, and very solid. Not believing data can be akin to a "Who are you going to believe? Me or your lying eyes?" situation. "Lying eyes" is the data; "me" is the current theory *not* fitting the data.

Also, the method is not convoluted at all if one backs away from all the inside complication, treats the machine like a black box, and compares execution time of one thing to another, keeping everything constant except the particular black box. Perhaps the difficulty you are having with my approach is a "can't see the forest for the trees" problem?

Based on your SPI score of 380s (6m 20s) on both of your machines, I predict your TS12 FPS will be 34.6 FPS. It will cost you two fitty to find out. Run the TS12 bench, get the data.

Math: my SPI 411s, my TS12 FPS 32; your SPI 380s. Your SPI score is 8.16% faster than mine. 32FPS + 8.16% * 32FPS = 34.6FPS.
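
Or, as a sketch:

```python
# Prediction for Dan's machines, numbers as stated above.
my_spi_s, my_fps = 411.0, 32.0   # 2600K @ 4.7G reference point
your_spi_s = 380.0               # Dan's loop-24 time (6m 20s)

speedup = my_spi_s / your_spi_s - 1      # ~8.16% faster SPI
print(round(my_fps * (1 + speedup), 1))  # 34.6 FPS predicted
```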

I've actually got an old P4 Prescott that ran 3.8G around here with a PCIe slot. If it still works, I'll present those results as well.

-Mike
 
OK, I picked up Trainz Simulator 12 from Amazon ($2.50 here - unfortunately it is not Steam). How do I run the benchmark? I can't find anything anywhere within the game or the game files, and even the Wiki and Google don't seem to have anything... did I buy the wrong TS12, lol?

I am such a bench whore.
 
This is the TS12 bench referred to earlier; let's call it TS12 Bench 1:

For consistency, grab Super Pi Mod v1.5 XS from https://www.techpowerup.com/download/super-pi/ and run the 32M version. Note the loop 24 score.

Keep the video card as fast as possible and don't enable any extra anti-aliasing settings; the goal is to see how the CPU is limiting frames. No vsync to cap frames, either. Make sure the settings in the video driver for the game are at defaults (but no vsync). I'm using Win7 64 and Nvidia driver 436.02, but I don't think that will matter.

1) Start the game and click Options
2) Don't need to change General or Planet Auran Tabs.
3) On the Display Settings tab select Directx, 1920x1080, 32 bit, fullscreen, Aspect Ratio Auto, Antialias Mode 2
4) On the Advanced Options tab select Vertical Sync Auto, Frequency Auto, and uncheck Shadows.
5) On the Developer Tab set the Asset Backups to 0
6) Click Ok
7) Click Start (The first time you run this it will do a database rebuild)
8) Click Select Route
9) Select Route Norfolk & Western - Appalachian Coal
10) Select Eastbound Coal Train
11) Close the dialog boxes
12) Go into the menu in the upper left corner
13) Select video settings and Set them to:
***Max Draw Distance: 4000m
***Scenery Detail: High
***Tree Detail: Ultra
***Texture Detail: High
***Anisotropy: 16 Highest
***Close the settings
14) Wait for the opening scene to fully load. Don't change the view or operate any controls - just wait.
15) Once the view stops populating, what is your frame rate? What is your video card load? What was your loop24 SPI32M score?
16) Also note your CPU and the clock rate it was operating at. Memory details optional, motherboard probably won't matter, but feel free to report as much as you want for completeness.
 
OK, I picked up Trainz Simulator 12 from Amazon ($2.50 here - unfortunately it is not Steam). How do I run the benchmark? I can't find anything anywhere within the game or the game files, and even the Wiki and Google don't seem to have anything... did I buy the wrong TS12, lol?

I am such a bench whore.

Posted above. There is no built-in bench for TS12. I defined the one we have used so far (there is only one) so we are all testing the same thing.

-Mike
 
Making the theory fit the evidence is a lot better than making the "evidence" fit the theory. I put the second "evidence" in quotes because technically that technique is making crap up. Making the theory fit the evidence is science - it's actually the whole point of science.

No, you are ignoring variables - potential variables that could alter the results of your method. Science is about making predictions, testing things, and seeing if you were right or wrong. Being wrong is often as useful as being right, or more so. Science is not about being a dick to someone else for questioning your methods or making predictions about variables you obviously haven't accounted for.

I could be 100% wrong, and the architectural differences I pointed out with HEDT and Threadripper CPUs in particular may mean nothing in this case. That would be interesting, as that's not the case in virtually anything else. If you were actually scientifically minded, you wouldn't be so hostile when your ideas were challenged or someone wanted to expand the discussion a bit and make predictions about how various configurations will perform in a given scenario.

Ignore or disbelieve the data at your peril.

Yeah, because this is so perilous. :rolleyes: It reminds me of other perilous tasks such as organizing my sock drawer.

Why are you always so fucking dramatic?

If you really don't believe the data, pick at the data itself and invalidate it if possible.

Apparently, reading comprehension isn't your strong suit. I never said I didn't believe the data. I agreed with you that Super Pi can be used as a predictor in this outlier of a game as it relates to Intel CPUs. Again, I made a prediction that you couldn't do that, and I was proven wrong. I accepted that because that's what the data pointed to. However, I do not accept it blindly as an absolute, because I can conceive of potential configurations or scenarios where your method may not hold up.

If you can't invalidate the data, perhaps rethink what may be going on.

I am not trying to validate or invalidate the data. I've simply theorized about the scenarios in which your method of prediction might not work, and why using something besides the actual game to predict game performance isn't a good idea. That's all. I made another prediction based on what I know of CPU architectures and how their designs relate to gaming. If it's tested on those configurations, I will either be right or wrong. I don't know.

The data I've presented here is very simple, very repeatable, and very solid.

I didn't say it wasn't.

Not believing data can be akin to a "Who are you going to believe? Me or your lying eyes" situation. "Lying eyes" is the data; "me" is the current theory *not* fitting the data.

Again, I didn't say I didn't believe the data. I question the wisdom of the methodology because I can conceive of potential scenarios where it won't work. My hypothesis on that hasn't been tested. Until someone wants to get this archaic piece of shit and put it on a 10980XE or different generations of Threadripper CPUs, it will remain untested.

I have a theory about how this game might perform on HEDT processors. Nothing more, nothing less. Why does this offend you so much?

Also, the method is not convoluted at all if one backs away from all the inside complication, treats the machine like a black box, and compares execution time of one thing to another, keeping everything constant except the particular black box. Perhaps the difficulty you are having with my approach is a "can't see the forest for the trees" problem?

Except you can't treat machines that way. There are considerable differences between these CPU architectures and their platforms. Again, you may be able to predict the frame rates for a specific set of Intel CPUs, but I'm not convinced it would work in every case. I'll say it again for the cheap seats: HEDT processors and platforms are different from their mainstream counterparts. There are additional latency issues that come up as a result of their designs. Therefore, it's possible, though not guaranteed, that TS12 may run worse on those CPUs despite Super Pi scores being virtually identical between a 10980XE and a 9900K.

Again, I'm simply pointing out a potential variable and making a prediction based on differences in various CPU designs. Nothing more, nothing less. I'm not disagreeing with your data or doing whatever it is you think I'm doing. I will say that I definitely think this is a convoluted way to find out how a game performs on different processors. It would simply be better to benchmark the actual game on different configurations than to rely on Super Pi as a predictor. As I said, Super Pi scores are virtually the same on these two test machines, yet they perform very differently in most tasks. Whether they would in TS12 is up for debate until someone tests it.

Based on your SPI score of 380s (6m 20s) on both of your machines, I predict your TS12 FPS will be 34.6 FPS. It will cost you two fitty to find out. Run the TS12 bench, get the data.

Math: My SPI 411s, my TS12 FPS 32. Your SPI 380s. Your SPI score is 8.16% faster than mine. 32 FPS + 8.16% * 32 FPS = 34.6 FPS.

I've actually got an old P4 Prescott around here that ran 3.8G and has a PCIe slot. If it still works, I'll present those results as well.

-Mike

It's $9.99 on Steam right now. I just checked. I'm curious about how it performs on the AMD Ryzen and Intel HEDT processors I have, but my curiosity isn't worth $10. For the record, I don't doubt that your prediction would be fairly close to accurate on my 9900K. On my 10980XE, I'm not so sure. And even if it were, Threadripper is a different animal, especially the 1st and 2nd generation CPUs. They suffer horrendously in gaming due to their CCX and NUMA latency issues.

You talked about contributing to the discussion. I acknowledged your data was right, but added that there are some potential situations where your method might not work, and why. It's called a hypothesis. I made a prediction based on my knowledge of CPU architecture. If you want to stick to the scientific method, you could actually be reasonable and state why you agree or disagree with that prediction. Instead, you've been an ass at every turn because... reasons? Well, suck it up, buttercup: people will challenge you all the time on this forum. If you don't like it, use the X button at the top of your browser and move on.

For some reason, you seem to take discussion and questions about the validity of the method as a personal attack. I can only guess that's why you are so hostile all the time. Instead of engaging with ideas that might run contrary to your current understanding, you simply stick to your guns. You say you want discussion on this topic, but the evidence suggests otherwise.
 
I can't believe, after all this, that there is no TS12 bench - just a bunch of instructions, and we are supposed to "eyeball" the FPS!? LOL
 
It's $9.99 on Steam right now. I just checked. I'm curious about how it performs on the AMD Ryzen and Intel HEDT processors I have, but my curiosity isn't worth $10. For the record, I don't doubt that your prediction would be fairly close to accurate on my 9900K. On my 10980XE, I'm not so sure. And even if it were, Threadripper is a different animal, especially the 1st and 2nd generation CPUs. They suffer horrendously in gaming due to their CCX and NUMA latency issues.


https://www.amazon.com/N3V-40711imulator-121-Simulator-Download/dp/B0056JLU4A?tag=hardfocom-20

I think you may be able to get the key into Steam or something, Dan... if you are that curious ($2.50).
 
I can't believe, after all this, that there is no TS12 bench - just a bunch of instructions, and we are supposed to "eyeball" the FPS!? LOL

Yep. Evidently. You could always download FrameView from NVIDIA. I'd actually like to see what the people who have bought this game get - you know, actual results which include minimum, average, and maximum frame rates as well as frametimes. This is where I think the OP would see a greater difference between something like a 2600K and a 9900K. Perhaps not, but it would be infinitely more useful and accurate than BS involving Super Pi. I am still amazed the OP can use Super Pi and get anything useful out of it in relation to this game. However, as usual, this type of flawed testing method is incapable of telling the full story.
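
If anyone does capture a FrameView log of this, boiling the CSV down to those numbers is only a few lines. Rough sketch - I'm assuming a PresentMon-style per-frame MsBetweenPresents column (FrameView is built on PresentMon, but check your CSV's header since column names vary by version) and a made-up log filename:

```python
# Summarize a FrameView/PresentMon-style CSV: min/avg/max FPS and 1% low.
import csv

def summarize(path, column="MsBetweenPresents"):  # column name: check your header
    with open(path, newline="") as f:
        ft_ms = [float(row[column]) for row in csv.DictReader(f) if row.get(column)]
    fps = sorted(1000.0 / ms for ms in ft_ms if ms > 0)  # instantaneous FPS, ascending
    one_pct = fps[: max(1, len(fps) // 100)]             # slowest 1% of frames
    return {
        "min": fps[0],
        "avg": 1000.0 * len(ft_ms) / sum(ft_ms),         # total frames / total seconds
        "max": fps[-1],
        "1% low": sum(one_pct) / len(one_pct),
    }

for k, v in summarize("ts12_log.csv").items():  # hypothetical filename
    print(f"{k}: {v:.1f} FPS")
```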

EDIT: The Amazon version key will not activate on Steam. However, I did download it and I'm going to run it on the 10980XE. I don't think there is much to be gained on my 9900K, but I might try it as I think there may be something different in the Frameview data that simply eyeballing the FPS doesn't indicate.

Prediction: It will run worse on the 10980XE than it does on the 9900K.
 
Good that you're running it. The 9900K and KS are so similar there's no point in me running it too. :)
 
Good that you're running it. The 9900K and KS are so similar there's no point in me running it too. :)

Yeah, there kind of is: you have a 2080 Ti (though I don't know what memory you have), not to mention the 5960X.
 
Man, if I had the X5675 with me I would give it a try, since I have very nice DDR3 2400 CL8 RAM. It's nowhere near 5.0GHz tho.
 