Post Your NEW 3DMark Sky Diver Benchmark Results

SixFootDuo

Intel 4770K @ 4.5GHz ~ 125 bootstrap / 36 Multi ~ 2250MHz DDR3 w/ 2 x XFX 290X @ stock

[benchmark screenshot]
 
i7 970 @ 4GHz + everything in main rig in sig. Downclocked from 4.4GHz because it's so damn hot here.

[benchmark screenshot]
 
Why is my combined score higher than everyone's, but my other scores are lower?

 
R9 295X2 stock 1018MHz. This is with 14.4 on Win7 x64 since I've just downgraded from Win 8.1... maybe I'll give Cat 14.6 a go.
[benchmark screenshot]



aaaand... here's 14.6. Pretty big improvement in the graphics score.
[benchmark screenshot]

 
I think it does quite well for hardware as old as it is... lol, my physics score hangs with CPUs that are 6 years newer... modern CPUs are a joke :)
[benchmark screenshot]
 
Hmm, managed to pull 22686 points on a GTX 780, but it looks like I'm a bit CPU limited. Link: http://www.3dmark.com/sd/2173893

In the Combined Test, there's enough of a CPU bottleneck that the GPU actually drops back to base (rather than boosted) 3D clocks. There literally wasn't enough GPU load for it to bother staying at 1200 MHz.

Confirmed by using K-boost to force the graphics card to maintain 1200MHz for the duration of all tests. Score and framerate didn't change, so it's a CPU bottleneck for sure.
 
Hmm, managed to pull 22686 points on a GTX 780, but it looks like I'm a bit CPU limited. Link: http://www.3dmark.com/sd/2173893

In the Combined Test, there's enough of a CPU bottleneck that the GPU actually drops back to base (rather than boosted) 3D clocks. There literally wasn't enough GPU load for it to bother staying at 1200 MHz.

Confirmed by using K-boost to force the graphics card to maintain 1200MHz for the duration of all tests. Score and framerate didn't change, so it's a CPU bottleneck for sure.

lol wow, your 780 barely beats my 7970... now that's funny :D ...about time they come out with a test that pushes the CPU as well ;)

edit: Physics Score 8244, lol, yeah you're CPU bottlenecked bad
 
Here we go, with the right test. lol Got 27827 http://www.3dmark.com/3dm/3355765
We're running the same video driver, right down to the sub-version detected by 3DMark... and yet it detects yours as "approved" and mine as "unapproved"? :confused:

lol wow, your 780 barely beats my 7970... now that's funny :D ...
That's what happens when the graphics in a test are so weak that it runs at 200 FPS. CPU huffs and puffs while the graphics card puts up its feet and waits for the rest of the system to catch up :p

Like I said, literally doesn't matter if I run the test with my GTX 780 at 950 MHz or at 1200MHz, the score and framerate are nearly identical across the board. GPU isn't the limiting factor in this particular benchmark.

about time they come out with a test that pushes the CPU as well ;)
That's what this test is doing, effectively... it's so light on graphics that it will run your CPU flat-out if you have any kind of decent graphics card.

If I could get a Core i7 3770k for cheap I'd swap to that (since it would allow me to keep my current mobo + RAM). Would probably improve my score considerably.
 
That's what this test is doing, effectively... it's so light on graphics that it will run your CPU flat-out if you have any kind of decent graphics card.

If I could get a Core i7 3770k for cheap I'd swap to that (since it would allow me to keep my current mobo + RAM). Would probably improve my score considerably.

No doubt about it... first time I've seen my CPU pushed to 80% in any graphics test ever... that i5 is killing you ;)
 
No doubt about it... first time I've seen my CPU pushed to 80% in any graphics test ever... that i5 is killing you ;)
80%? You might want to double-check your charts. Test #1 and #2 will have lower CPU usage, but your CPU should be at 100% in Test #3 no matter what, and very-nearly 100% in Test #4 (unless you have a weak graphics card holding things back).

I don't have a weak graphics card holding anything back, so I'm running straight into a CPU bottleneck in test #4. CPU usage at 100%, GPU usage so low that the card doesn't even bother boosting.
 
Huh? What are you talking about?

Look at the results link from my first post in this thread, my CPU and GPU are both overclocked: http://www.3dmark.com/sd/2173893

I was just noticing that, in this particular test, my GPU clock doesn't seem to matter much because I'm almost totally CPU-bound.

Wasn't actually addressing you or whatever it is you happen to be feverishly arguing about atm.

But since you asked, I was talking about:
http://www.futuremark.com/support/benchmark-rules
and this quote:
So overclocking isn't allowed now?

Overclocking by manufacturers is allowed provided that it applies equally to all apps, all of the time. Overclocking optimizations that are selectively applied to our benchmarks are forbidden.
But it seems I was confused (takes a guy with a spine to admit when he's confused or wrong or mistaken) and they aren't really talking about video card overclocking, I guess? It seems to be more about optimizations from cell phone manufacturers.
 
Wasn't actually addressing you or whatever it is you happen to be feverishly arguing about atm.
Beside the point: there were already plenty of examples of overclocked components in this thread. Was just pointing out that fact, since it plainly shows overclocking is allowed.

And you don't have to formally address someone in order for them to help you. This is a public forum, after all...

But it seems I was confused (takes a guy with a spine to admit when he's confused or wrong or mistaken) and they aren't really talking about video card overclocking, I guess? It seems to be more about optimizations from cell phone manufacturers.
They're talking about application-specific optimizations. So yes, Android phones that clock-up when they detect 3DMark (and ONLY 3DMark) wouldn't be allowed.

Makes sense, as results from such a device wouldn't be representative of actual everyday performance.

edit: Physics Score 8244, lol, yeah you're CPU bottlenecked bad
Yeah, I was noticing that. Comparing our physics scores (and only our physics scores, since they're totally CPU dependent):

Core i5 2500k @ 4.5 GHz = 8244 points
Core i7 4770k @ 4.5 GHz = 11973 points

That's a 45% increase in performance at the same clockspeed... which is pretty nuts. The only thing I can think of that would put such a massive gap between these two chips is if 3DMark seriously loves HyperThreading.
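To show where that number comes from, here's the arithmetic on the two physics scores above:

11973 / 8244 ≈ 1.45, i.e. roughly a 45% higher physics score at the same 4.5 GHz.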
 
Core i5 2500k @ 4.5 GHz = 8244 points
Core i7 4770k @ 4.5 GHz = 11973 points
Core i5 2500k @ 4.5 GHz = 8244 points
Xeon X5670 @ 4.2GHz = 12386 points (5 or 6 year old CPU lol)

That's a 45% increase in performance at the same clockspeed... which is pretty nuts. The only thing I can think of that would put such a massive gap between these two chips is if 3DMark seriously loves HyperThreading.


fixed it for you lol :) fuck, it'll be a while still... before I can upgrade
 
fixed it for you lol :) fuck, it'll be a while still... before I can upgrade
Well, just keep in mind, your performance depends on how threaded a given application is (your CPU with 12 threads is just barely ahead of a more-modern chip with only 8 threads in this bench, for example).

Basically, you have comparable performance to a 4770k if the application you're running spawns enough threads. The fewer threads in-play, the more the 4770k will start to outrun you.

Now, with that said, is it a large enough difference in single-threaded performance to care about just yet? Probably not.
 
That is good for an older CPU, but it's not that old; even if you bought it on launch day, it's not quite 4.5 years old.

my bad... the motherboard has got to be over 5 years old... close to 6, I believe

edit: Unknown-One, what was your max CPU usage during the test... was just curious... mine never went over 85% at most... which in itself is pretty good... BF4 as an example uses like 20% and people actually bitch about it being CPU bottlenecked... damn, what the hell are they running?
 
what was your max CPU usage during the test... was just curious... mine never went over 85% at most... which in itself is pretty good... BF4 as an example uses like 20% and people actually bitch about it being CPU bottlenecked... damn, what the hell are they running?
During the last two tests? It was pretty much pegged at 100% all the way through.

Also, there's a key distinction to be aware of when you're talking about CPU usage on multiple cores. Let's say you have a quad-core processor, and a game only spawns 2 threads, and those 2 threads use every last scrap of CPU time they can get their hands on...

End result? You're CPU limited even though the CPU graph only shows 50% usage.
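If you want to see that effect for yourself, here's a minimal sketch (Python, purely illustrative; it assumes a 4-core CPU and that the psutil package is installed): two fully CPU-bound workers peg two cores, yet the system-wide usage reads only about 50%, even though they are the bottleneck.

```python
# Illustrative sketch: on an (assumed) 4-core CPU, two fully CPU-bound
# workers peg two cores, yet overall CPU usage reads only ~50%.
import multiprocessing as mp
import time

import psutil  # assumed installed: pip install psutil


def busy_loop(seconds: float) -> None:
    """Spin on one core for the given number of seconds."""
    end = time.time() + seconds
    while time.time() < end:
        pass  # pure busy-wait, keeps a single core at ~100%


if __name__ == "__main__":
    workers = [mp.Process(target=busy_loop, args=(10,)) for _ in range(2)]
    for w in workers:
        w.start()

    # Sample total CPU usage while the two workers spin.
    for _ in range(5):
        print(f"total CPU usage: {psutil.cpu_percent(interval=1):.0f}%")

    for w in workers:
        w.join()
```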
 
http://www.3dmark.com/3dm/3357591?

3DMark is installed on my Steam drive, which unfortunately only has 300MB free of 3TB. Damn Steam Summer Sale. :) I really should try OC'ing this R9 290, but it's missing part of its PCI Express connector, so I'm kinda leaning towards leaving well enough alone. My CPU has all of the energy management stuff turned on also. Still not a bad score, I think. Maybe I'll try again with my PC more streamlined for benchmarking. :)
FX-9370 @4.7 GHz
R9 290 Stock
1866MHz 8GB memory

Overall Score: 23,105
Graphics Score: 36,112
Physics Score: 8,806
Combined Score: 18,260

[benchmark screenshot]
 
Scored 28439 with 1x Titan, with Diablo 3 and other processes running in the background.

Haven't tried with both Titans yet.
 
During the last two tests? It was pretty much pegged at 100% all the way through.

Also, there's a key distinction to be aware of when you're talking about CPU usage on multiple cores. Let's say you have a quad-core processor, and a game only spawns 2 threads, and those 2 threads use every last scrap of CPU time they can get their hands on...

End result? You're CPU limited even though the CPU graph only shows 50% usage.

we can only hope they make games from this point forward using at least 8 threads...or even 12 for me
 
we can only hope they make games from this point forward using at least 8 threads...or even 12 for me

Or a Windows OS that scales programs up to the number of available threads. Win 8.1 is a lot better than 7, but it still needs work.
 
You guys know that newer NVIDIA drivers are messed up when running Sky Diver, right? I get a black screen on one of the graphics tests and my bench score is shit. Like SLI is not working or something.
 
Or a Windows OS that scales programs up to the number of available threads. Win 8.1 is a lot better than 7, but it still needs work.
No OS currently in existence can run one thread on multiple cores. Applications have to be multi-threaded from the get-go.

Even if you could force each instruction from a single thread to execute on a different core, you'd very-likely make the application run slower. You'd end up continually running into a condition where the instruction on one core needs the output of an instruction on another core, so everything would have to pause until the core handling the needed instruction finishes.
 
No OS currently in existence can run one thread on multiple cores. Applications have to be multi-threaded from the get-go.

Even if you could force each instruction from a single thread to execute on a different core, you'd very-likely make the application run slower. You'd end up continually running into a condition where the instruction on one core needs the output of an instruction on another core, so everything would have to pause until the core handling the needed instruction finishes.

Doesn't mean that it has to stay this way. I think other ways of multithreading are going to be the future, as it's obvious that better CPUs aren't the answer.
 
Doesn't mean that it has to stay this way. I think other ways of multithreading are going to be the future, as it's obvious that better CPUs aren't the answer.
Well, the thing is, it's a fundamental problem with how the simplest bits of code execute. Here's an example:

1. I want to know the answer to "X + Y"
2. The answer to X is "A + B"
3. The answer to Y is "C + D"
4. I cannot execute "X + Y" until I've already executed "A + B" and "C + D"

Implications of this example scenario?:
- #2 and #3 can be run in parallel (at the same time) because they have no dependency on one another. These operations can be threaded.
- #4 requires the data from #2 and #3 in order to execute. There is absolutely no way to run #4 at the same time as #2 and #3, because it requires data that has not yet come into existence. Spawning an additional thread for #4 would be pointless, because it has to wait on #2 and #3 anyway.

The only way to work around this is to NOT use operations that have dependencies on other operations, but that's not always possible, and it's still up to the developer programming the application to implement (which circles right back around to what I said originally: applications have to be threaded from the get-go).

Even more issues with getting one thread to run across multiple cores can be found here: http://arstechnica.com/uncategorized/2006/07/7263-2/
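To make that concrete, here's a minimal sketch (Python, just for illustration; the values A, B, C, D and the add helper are made-up placeholders): steps #2 and #3 run on separate worker threads because they're independent, while step #4 can't start until both results exist.

```python
# Illustrative sketch of the dependency chain above: #2 and #3 can run in
# parallel, #4 must wait for both of their results.
from concurrent.futures import ThreadPoolExecutor

A, B, C, D = 1, 2, 3, 4  # placeholder inputs


def add(left, right):
    return left + right


with ThreadPoolExecutor(max_workers=2) as pool:
    fut_x = pool.submit(add, A, B)  # step #2: X = A + B
    fut_y = pool.submit(add, C, D)  # step #3: Y = C + D (runs alongside #2)

    # Step #4: X + Y. The .result() calls block until #2 and #3 finish --
    # this is the serialization point no scheduler can parallelize away.
    total = add(fut_x.result(), fut_y.result())

print(total)  # 10
```

No matter how many cores you throw at it, that final add is stuck waiting on the two operations that feed it.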
 
Well, the thing is, it's a fundamental problem with how the simplest bits of code execute. Here's an example:

1. I want to know the answer to "X + Y"
2. The answer to X is "A + B"
3. The answer to Y is "C + D"
4. I cannot execute "X + Y" until I've already executed "A + B" and "C + D"

Implications of this example scenario?:
- #2 and #3 can be run in parallel (at the same time) because they have no dependency on one another. These operations can be threaded.
- #4 requires the data from #2 and #3 in order to execute. There is absolutely no way to run #4 at the same time as #2 and #3, because it requires data that has not yet come into existence. Spawning an additional thread for #4 would be pointless, because it has to wait on #2 and #3 anyway.

The only way to work around this is to NOT use operations that have dependencies on other operations, but that's not always possible, and it's still up to the developer programming the application to implement (which circles right back around to what I said originally: applications have to be threaded from the get-go).

Even more issues with getting one thread to run across multiple cores can be found here: http://arstechnica.com/uncategorized/2006/07/7263-2/

I still think that someone really smart will figure out an efficient way to do it. Just accepting things as the way they are isn't in my DNA. Quite sure that someone is trying to tackle the issue right now.
 
[benchmark screenshot]


4770k @ 4.5 GHz (100x45)
HD 7950 (1100 core, 1400 memory, 14.6 drivers)
 
80%? You might want to double-check your charts. Test #1 and #2 will have lower CPU usage, but your CPU should be at 100% in Test #3 no matter what, and very-nearly 100% in Test #4 (unless you have a weak graphics card holding things back).

I don't have a weak graphics card holding anything back, so I'm running straight into a CPU bottleneck in test #4. CPU usage at 100%, GPU usage so low that the card doesn't even bother boosting.

you're CPU bottlenecked... I'm video card bottlenecked... only way for me to tell is watching the OSD in Afterburner... the card pegs 99%, while I did see the CPU spike to 92% (in the physics test)... I never saw it reach 100%... so it's either bottlenecked by the card or it's not able to use 12 threads... is the card weak... in comparison to yours, yeah... but it's not holding back any games I play at 1080p... well, except Crysis 3... but I doubt even yours holds 60 FPS on that game
 
is the card weak... in comparison to yours, yeah... but it's not holding back any games I play at 1080p... well, except Crysis 3... but I doubt even yours holds 60 FPS on that game
Yeah, I'm not all that worried about my "bottleneck" either, to be honest. Even in Test #4, the benchmark ran at 80+ FPS the whole way through. More than enough for my 60Hz monitors.

As for Crysis 3, 1080p isn't much of a problem, but I like to game at 5760x1200... yeah, have to drop some settings for that :D
 