3DMark Time Spy DX12 test

No, it's a DX12 feature level. 11_0 is the minimum feature set required to be 'DX12 ready', so it runs on a wide range of hardware.
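To make the feature-level point concrete, here's a minimal Python sketch. The hex constants match the `D3D_FEATURE_LEVEL` values in `d3dcommon.h`; the `dx12_ready` helper is just illustrative, not a real API.

```python
# D3D feature-level constants as defined in d3dcommon.h (hex-encoded major.minor).
D3D_FEATURE_LEVEL_11_0 = 0xB000
D3D_FEATURE_LEVEL_11_1 = 0xB100
D3D_FEATURE_LEVEL_12_0 = 0xC000
D3D_FEATURE_LEVEL_12_1 = 0xC100

def dx12_ready(max_supported_level):
    """The DX12 API requires at least Feature Level 11_0 from the hardware."""
    return max_supported_level >= D3D_FEATURE_LEVEL_11_0

# An FL 11_0 card (e.g. a GTX 680-class GPU) can run DX12 even though it
# supports none of the optional 12_x features.
print(dx12_ready(D3D_FEATURE_LEVEL_11_0))  # True
```

This is why "DX12 ready" covers so much hardware: the feature level describes what the GPU can do, not which API version it can be driven by.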

It's supposed to be the new DX12 bench, yet the compute queues are a very low percentage of the total run. Obviously this isn't what anyone expected, so it's sorta fake: more DX11 than DX12, without getting into the nuts and bolts, which have been explored here and abroad already.


Also want to reiterate that y'all should not be surprised; it's FM after all, lol.
 

Compute queues are only one part of the DX12 spec, not all of it. (Not to mention that the flavor of compute AMD is good at, async shaders, is optional.)
 

It's been debunked here and elsewhere already. 20% of the work being on the compute queues is not little, and Futuremark responded very quickly to these complaints. They explain exactly what goes on the compute queue and why this adequately represents the average DX12 game.
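As a back-of-the-envelope check on that 20% figure, a simple Amdahl-style model (this is my own sketch, not anything from Time Spy's actual internals) shows why the measured gains land where they do:

```python
def ideal_async_speedup(compute_fraction):
    """Upper bound on frame-time speedup if the work on the compute queue
    (a fraction of total GPU work) overlaps perfectly with the graphics queue."""
    return 1.0 / (1.0 - compute_fraction)

# With ~20% of the work on the compute queues, even perfect overlap caps the
# gain at about 25%, so measured async gains of 10-15% on GCN are plausible.
print(round(ideal_async_speedup(0.20), 2))  # 1.25
```

In other words, with a fifth of the frame on the compute queue, nobody should expect async to produce a 2x swing; the ceiling is modest by construction.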

 
Lmao, 20% is little when it's a freaking benchmark. That said, I don't care; I even expected this bias. It's FM!
 
FL 11_0 is the core DX12 spec. Maybe you'd prefer that they do an FL 12_1 benchmark instead? Hint: no AMD card adheres to that spec yet; only NVIDIA does.
Actually saw this debated last night: NVIDIA meets 4 or 5 of the requirements of 12_1, AMD only 3. So assuming they are correct, AMD has better support.
 

There are two requirements for FL12_1: Conservative Rasterization (CR) and Rasterizer Ordered Views (ROVs). AMD has neither. What are you even saying?

Edit:

JustReason, your posts really throw me off! My instinctive reaction is to assume you know what you're saying is false, but then I can't justify how you think you can get away with saying something virtually anyone is capable of disproving...

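The two FL12_1-specific requirements (Conservative Rasterization and Rasterizer Ordered Views) boil down to a simple capability check. A hypothetical sketch, where the capability sets are simplified stand-ins for the chips being discussed, not full spec lists:

```python
# Features a device must expose beyond FL12_0 to claim Feature Level 12_1.
FL12_1_EXTRAS = {"ConservativeRasterization", "RasterizerOrderedViews"}

def supports_fl12_1(device_features):
    """True only if the device exposes both FL12_1-specific features."""
    return FL12_1_EXTRAS <= device_features

# Illustrative (incomplete) capability sets for the architectures in question.
maxwell2 = {"ConservativeRasterization", "RasterizerOrderedViews", "TypedUAVLoads"}
gcn      = {"TypedUAVLoads"}  # no CR, no ROVs

print(supports_fl12_1(maxwell2))  # True
print(supports_fl12_1(gcn))       # False
```

Supporting three other DX12 features doesn't help here; both of these specific boxes have to be ticked, which is why "4 or 5 vs. 3" isn't how the feature level is decided.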



What do you mean, 20% is little? Little compared to what? The performance gains for GCN are exactly in line with pretty much every developer statement ever.
 
Like I said, it was another debate/discussion. I'd guess they are counting the tiers below, so 12_1 would include all lower tiers, hence their numbers.
 

You'd think that's how it would work but it's not. Feature Level 12_1 doesn't require that 12_0 be checked off first.
 

But Maxwell 2 does meet all the 12_0 requirements. I thought it didn't because I saw Tiled Resources Tier 1; turns out that's Maxwell 1.
 
Okay, I didn't have any support to show that Maxwell/Pascal had the 12_0 and 12_1 Feature Levels enabled, so I wasn't going to make that claim. If Maxwell 2 and Pascal are 12_0 and 12_1 Feature Level certified, that's good.
 

Yeah, I wasn't saying you did, haha! Your mentioning that meeting one FL's requirements doesn't entail meeting all the lower FLs just made me double-check.
 
GCN hardware gets a >10% performance increase from async, yet fanboys bitch about this benchmark. What the hell are they expecting? Are they not happy because Radeons aren't 100% faster than GeForces, or what? In any case, I just don't understand... I can only think that some kind of twisted fanboyism clouds their judgement.
 

You should see what they're saying about the Asus Strix 480, it's hilarious. The heatsink is to blame: because it's designed for the GTX 1080, it makes bad contact with the GPU, and therefore it clocks badly. Ha. Well, they already blew that excuse on the Strix; they're gonna have to get creative. It's always a conspiracy against AMD, eh?
 
Aye... It draws more power than a 1080 and the chip is smaller, if I remember correctly, so obviously it's going to run hotter. Also, Asus fucked up the fan profile. Sure, it would run hotter with a conservative profile, but it wouldn't be an oven.
 
Pretty funny how that stuff goes around. I remember comments about the DCU III when it first came out, saying it was made for Fury, so it didn't make good contact with GM100 :p.
 
I just got a GTX 1070 and ran the Time Spy test (free Basic Edition)... does a score of 5714 seem low for the 1070?
 
I ran it last night on my 3.3GHz Ryzen 1700 (auto overclock): 7169 with two RX 480 4GB @ stock, which I paid $160 each for. Not many games support CrossFire well, but I love mining the 22 or so hours I'm not using the computer.
 