AMD Radeon R9 Nano Video Card PAPER Launch @ [H]

kyle, brent

when are you expecting a nano on the test bench?

It should have a little OC headroom, right? It's a 175W card, but it can draw 150W from the 8-pin and 75W from the PCIe slot.
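Quick napkin math on that budget (a sketch assuming the card can actually pull the full spec maximum from each source; real limits depend on how AMD tuned the board):

```python
# Rough OC headroom estimate for the R9 Nano.
# Assumption: the card can pull the full spec maximum from each source.
PCIE_SLOT_W = 75     # PCIe x16 slot spec limit
EIGHT_PIN_W = 150    # single 8-pin PEG connector spec limit
BOARD_POWER_W = 175  # AMD's rated typical board power for the Nano

available = PCIE_SLOT_W + EIGHT_PIN_W  # 225 W total budget
headroom = available - BOARD_POWER_W   # 50 W, roughly 29% over rated power
print(f"{available} W available, {headroom} W of headroom")
```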

This is what its PR company put forward. I have heard ZERO about getting actual Nano hardware. Interesting to note, the PR company called it a "paper launch" as well. So whoever was beating on me a page ago for using such harsh un-PC language can suck it. ;)

Hi Kyle,

AMD has not yet issued any samples of the card. Thursday is the paper launch of the R9 Nano, but it will not be on shelf for a few weeks. AMD will be connecting with press directly regarding sampling.

Please let me know if you’re interested in learning more about AMD’s small form factor GPU and are available for a briefing tomorrow at 1 p.m. PT.
 
Good call on emphasizing paper launch. At first I got bent out of shape, but you are right that this was a bad habit not too long ago and it needs to stop. Kudos to Kyle and team for calling it like it is.
 
I guess the way I feel about the whole "variable clock speed" and "up to 1000MHz" thing is this:

600MHz when you need it, 1GHz when you don't :p

In all likelihood the only time the full GHz will be available is when you are bouncing off a vsync limiter and your core utilization is at like 40% anyway, which means it could just have clocked lower with no ill effect.
 
As far as SFF goes, been there, done that.

When I returned to the games/overclocking/building hobby in 2009 after a 5 year hiatus I decided, small and sleek was the way to go, so I built an i7-920 system around one of those Shuttle SFF units.

I quickly found that the cooling stunk, and the proprietary PSU format topped out at 400W, so when I later got a GTX 580 it overloaded it and I couldn't upgrade (for a while I tried one of those drive bay PSUs, which later got a failing [H] review, to supplement the proprietary PSU, but that was a major fail).

I also quickly ran out of expansion space. Towards the end of using that system, I wanted more PSU, I wanted more PCIe expansion, and I wanted better cooling. I eventually sold the shuttle system (case, motherboard, CPU and RAM) here on the hardforums, and built a "waiting for bulldozer" mid tower build around a 990fx board and a Phenom II x6 1090T in summer of 2011. When bulldozer was a major disappointment, I bought the x79 + 3930k I am using now, in late 2011, and now I'm in a full tower, and will never go small form factor again.

To me the size doesn't matter. I'm putting it under my desk anyway; it is out of the way and not a problem. I actually like it better, as the SFF went on my desk, and all the fans and noise were closer to my head, and it took up desk space.

I have a couple of compact computers serving as HTPC's, but other than that, I learned my lesson from SFF. For my main rig, I'm just never going to be happy with it.
 
It doesn't lower the power enough to refute what I said. It's marginal at best.

You are correct. In my experience it was 2-3% per 10C. Relatively insignificant.
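For what it's worth, here's how a 2-3%-per-10°C leakage figure compounds over a realistic temperature delta (illustrative model only, using a hypothetical 2.5% midpoint, not AMD data):

```python
# Illustrative model: leakage power grows ~2.5% per 10 degrees C (hypothetical midpoint).
def power_scale(delta_c: float, pct_per_10c: float = 2.5) -> float:
    """Multiplier on GPU power for a temperature rise of delta_c degrees C."""
    return (1 + pct_per_10c / 100) ** (delta_c / 10)

# e.g. going from ~55 C on water to ~85 C on a small air cooler:
print(round(power_scale(30), 3))  # ~1.077, i.e. under 8% extra power
```

Even across a 30°C swing the penalty stays under 10%, which supports the "relatively insignificant" reading.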

Just got my Fury X. It's basically a full size card if you take into consideration the cable and tubes jutting out the ass end.

Now the Nano will be small. I have a feeling an OC'd 970 might match it.
 
Don't underestimate the stupidity of the market. Completely clueless console die-hard fans have been slowly moving to PC and, of those I've come to know, can't do much past finding the power button. It's hilarious seeing them play a FPS game on the PC for the first time. Some rage so hard they never give it a second chance.
 
Personally, as an NCase mITX owner (currently with a Gigabyte SFF 970), I was waiting to see what AMD was going to do with the Fury Nano. If what was just announced is true and stays true when it reaches retail, then I personally won't be buying just yet. If a few months down the road, the price comes down, maybe I'll change my mind (since we're so close to XMas / $$$Bonus time anyway).

When you're a small form factor enthusiast - you really can't afford to be too picky given what's out there. And if you want Halo / Ultra level performance, like always SFF or full size, it's going to cost some $$$.

BTW: I hope AMD copies Brent's suggested pricing strategy ... as it makes much more sense, even if the Nano has to be a cut down card instead of a halo card. Though the rebrands / relabels I suspect were due in part to the need to dispose of leftover inventory of old cards.
 
§kynet;1041821117 said:
I don't either, at all. Seems like AMD asking a premium price has ruffled more than a few feathers, almost like people don't want AMD to be able to sell products at a premium regardless of the performance. Which brings us back to Titan X, terrible value but it keeps getting praised.


This would be around a 320 watt part, if not more, and you can guess the flak AMD would get for that.

If you want the best, you HAVE to get the Titan X. There is no option. If you want the best you buy a Ferrari, not a Corvette (not to slam corvettes which I love). Sure the value isn't there. But you really don't have an option if you want the best. If AMD came up with something faster, at a higher price, then that super product would get praise at the "cost is no object" target.

GDDR5 would be a no go because the Fury's memory controller is designed for an extremely wide slow bus.

Do we have real sales numbers of the Fury line? Sold out doesn't mean much if they only have 100 sold total.

Either way, I agree with Brent. If AMD couldn't market the Fury cheaper, they shouldn't have made it at all. Soon stock will be sitting on the shelves once the typical inrush of early adopters and die hard fan boys are out.

I had such high hopes for AMD too. :(
 
I don't know if you have a Fury X on hand, but have you considered using it, underclocking it to 850MHz/900MHz/1000MHz and benching it for worst/average/best case scenarios for the R9 Nano? Or did the NDA specifically state NOT to do that or publish that info?

Well, that assumes that you can trust AMD's assessment on the average clocks.

Past history suggests that AMD, Nvidia and their board partners all have a habit of making "optimistic" assessments of these things.

You don't want to wind up in a position of publishing "here's how the Nano will perform" based on 850MHz testing, only to find that actual clocks are 600MHz when it actually launches.

The clock speed will probably vary from title to title. It's probably better to just wait and see.
 
I don't know if you have a Fury X on hand, but have you considered using it, underclocking it to 850MHz/900MHz/1000MHz and benching it for worst/average/best case scenarios for the R9 Nano? Or did the NDA specifically state NOT to do that or publish that info?

AMD has no say in what we do with publicly available hardware. That said, I have no interest in pseudo-testing downclocked Fury X hardware to see what Nano will do. I have no idea what the clocks on the card will be, etc. We may very well do it like we have done motherboards in the past and incubate it just like it would be in a tiny quiet SFF case.
 
Soon stock will be sitting on the shelves once the typical inrush of early adopters and die hard fan boys are out.

I had such high hopes for AMD too. :(

If stock is kept very low, then no, there will never be a lot of extras sitting around. I also think that as better drivers are released and/or better performance under DX12 is reported by the press, waves of interest will keep already low stocks at zero.
That's not to say that any of this is a good thing for AMD. Low production means low sales, no matter how many people want them.
 
Either way, I agree with Brent. If AMD couldn't market the Fury cheaper, they shouldn't have made it at all. Soon stock will be sitting on the shelves once the typical inrush of early adopters and die hard fan boys are out.

I had such high hopes for AMD too. :(

That is true, but I highly doubt the current position was one that AMD planned to find themselves in.

They probably did not expect Nvidia's willingness to drop the 980 and 980 Ti as low in price as they did, and I also suspect that they originally planned to launch this product on a smaller process, which fell through due to manufacturing availability.

AMD is probably currently betting that the fact that there are no other performance oriented small form factor cards out there will mean they can get away with charging a higher price for now, until the next generation when they hopefully have things sorted out. Who knows if that will wind up being the case.

Here's to hoping they figure their shit out, and come back swinging with the next gen of cards (though I feel I have been saying stuff like this with regards to AMD for way too long now; they are starting to feel like the Washington Generals...).
 
I don't know if you have a Fury X on hand, but have you considered using it, underclocking it to 850MHz/900MHz/1000MHz and benching it for worst/average/best case scenarios for the R9 Nano? Or did the NDA specifically state NOT to do that or publish that info?
That would not be reflective of the Nano's actual performance because of temperatures impacting the throttling and clock speeds of the card. No one knows what the Nano will realistically run at.

Kyle, if you do test the Nano, I would really like to see some testing with the GPU installed in an actual SFF case with realistic airflow. Given the constraints these cards are intended to handle, I don't think an open air bench test or even a normal ATX case would be reflective of actual use.
 
If you want the best, you HAVE to get the Titan X. There is no option. If you want the best you buy a Ferrari, not a Corvette (not to slam corvettes which I love). Sure the value isn't there. But you really don't have an option if you want the best. If AMD came up with something faster, at a higher price, then that super product would get praise at the "cost is no object" target.

GDDR5 would be a no go because the Fury's memory controller is designed for an extremely wide slow bus.

Do we have real sales numbers of the Fury line? Sold out doesn't mean much if they only have 100 sold total.

Either way, I agree with Brent. If AMD couldn't market the Fury cheaper, they shouldn't have made it at all. Soon stock will be sitting on the shelves once the typical inrush of early adopters and die hard fan boys are out.

I had such high hopes for AMD too. :(

At these prices, with these performance levels, the Fury line actually undermines the credibility of the entire Radeon line. The Fury line is a HALO series; with it failing to justify its price, it paints a bad image for the entire Radeon line.

For example, I had the following conversation with a friend.

"Hey man, I'm going to get a GTX970"

"You know, I saw the R9-390 on sale for $310, it's generally a faster card."

"Nah, fuck that shit. AMD's stuff is overpriced and slow, I heard how terrible the Fury-X does against the GTX 980 Ti, I'll just stick with nVidia"

I then show him videos and reviews showing the R9-390 beating the GTX970, and even show him my overclocked R9-290x beating my stock GTX970.

"That's nice, but nVidia has better drivers, they'll catch up with driver updates."

Then I show him how in DX12 even the mighty GTX980-Ti falls to the ancient R9-290x.

"That doesn't matter, nVidia has always had better drivers, they'll surpass the Radeon when it matters."

I'm hardware agnostic, and I know AMD is garbage in the Ultra High end, but the 380 is a better value than the GTX960, the 390 is amazingly better than the GTX970, and the 390x is pretty darn close to the GTX980 for a bit less. But all that doesn't matter because the Halo products are considered underpowered and overpriced, and that reputation pollutes the entire line.

The Fury X is basically like having "Hitler" as a last name in today's day and age. Even if AMD had only 500 Fury Xs for sale per month, selling it for $550, undercutting the GTX 980 Ti by $100, would have gotten more positive word of mouth, greater mindshare, and put a bright shining positive light upon the entire Radeon brand as a leader in value/performance (like the Radeon 5870 and Radeon 4870 did for their respective lines), as opposed to the general view that AMD's products are overpriced and underperforming, which is true for the R9 Fury X, Fury and Nano, but not really true for the 390 and 380. The 390x is arguable; it really should have been $400.
 
That would not be reflective of the Nano's actual performance because of temperatures impacting the throttling and clock speeds of the card. No one knows what the Nano will realistically run at.

Kyle, if you do test the Nano, I would really like to see some testing with the GPU installed in an actual SFF case with realistic airflow. Given the constraints these cards are intended to handle, I don't think an open air bench test or even a normal ATX case would be reflective of actual use.

I would argue that both would be of value.

I agree that most SFF cases have limited airflow, but that isn't necessarily the case for all of them.
 
If you want the best, you HAVE to get the Titan X.
No, if you want the fastest card you buy a factory OC'd 980 Ti. Titan X is a total rip-off and terrible value; no amount of mental gymnastics will change that. But I have zero problems with Nvidia making such a card; if there is a market for it and people buy it, good on Nvidia. Same for the Nano: if people want to buy it at $649, good for AMD.
 
§kynet;1041821885 said:
No, if you want the fastest card you buy a factory OC'd 980 Ti. Titan X is a total rip-off and terrible value; no amount of mental gymnastics will change that. But I have zero problems with Nvidia making such a card; if there is a market for it and people buy it, good on Nvidia. Same for the Nano: if people want to buy it at $649, good for AMD.

If you compare OC to OC, the Titan X is still usually the faster card, even if only by a little bit.

I agree the little bit of extra performance isn't worth the price premium, but his statement is still true: if you want the fastest single GPU card, it's Titan X or bust.
 
That would not be reflective of the Nano's actual performance because of temperatures impacting the throttling and clock speeds of the card. No one knows what the Nano will realistically run at.

Kyle, if you do test the Nano, I would really like to see some testing with the GPU installed in an actual SFF case with realistic airflow. Given the constraints these cards are intended to handle, I don't think an open air bench test or even a normal ATX case would be reflective of actual use.

Hmm. They should have Steve do a SFF case blowout one day with the Nano / mitx 970 vs regular sized cards. I bet they could get some of the larger case manufacturers to send them their best examples. ;)
 
You should put this system in the general hardware thread. There are things that could be better.

Personally if I was doing a low profile PC I'd buy the chassis for "show" (power and optical disc) then use long length wires to a normal PC somewhere else (behind the TV stand or basement, attic, whatever). 15'+ cables are very cheap. Even with buying the cables it saves $ and you don't have to sacrifice power. Also makes it completely silent.

Still in planning, and that always takes time, and of course every part must be examined; there could be a better 45W or lower CPU that could drive that GPU better. It's an early research stage; I haven't had the urge to build for years :), so I'm a bit rusty.

But hiding the computer, why would you do that? Is it because it's big and ugly and makes a lot of noise?
 
Personally if I was doing a low profile PC I'd buy the chassis for "show" (power and optical disc) then use long length wires to a normal PC somewhere else (behind the TV stand or basement, attic, whatever). 15'+ cables are very cheap. Even with buying the cables it saves $ and you don't have to sacrifice power. Also makes it completely silent.

I like that idea - it's the 'Wizard of OZ' approach to SFF building. :D
 
Then I show him how in DX12 even the mighty GTX980-Ti falls to the ancient R9-290x.

I agree with most of this. In the mid range, Radeons make a lot of sense, but don't get a lot of attention because of the halo card fails.

This is why companies make and sell Halo products, even though they only sell in smallish quantities.

That being said, I would argue that current DX12 benchmarks are of limited value.

It could be that the 9xx series geforce cards wind up being a massive fail when it comes to DX12, compared to Radeon cards, but I am going to gamble that is not going to be the case.

Right now we have one title to run benchmarks on and it is in pre-alpha.

As DX12 titles start actually coming out, both AMD AND Nvidia will likely have better optimized drivers in place to make use of the feature set, and at that point I doubt we will see any massive fail on either side. I suspect we will see both scale similarly to how they scale in Dx11, but quite honestly, there is no way at all to know right now. Current DX12 benchmarks are simply not of very much value to make conclusions on.

As I see it, trying to read the tea leaves and optimize GPU longevity by shopping for either more VRAM or better performance in future APIs is a fool's errand. Just buy what works now, and when the future comes, you can always reassess and buy something else.

I may have had my previous Titan a very long time (March 2013 to July 2015), but that is hardly the norm for me. My two 980 Tis are currently great, but I am going to be watching very closely again next generation when Pascal and AMD's successor to the Fury X come out.

Ideally I want to sell the 980Ti's and get something that can match their performance in SLI but on one GPU, or at least not fall too far behind.

So, my point was, I'm not going to fret about DX12 now. I probably won't have these GPU's anymore when DX12 is a real issue anyway.
 
Have you given any consideration to the impact of placing a relatively large heat source (whether the Nano or 970 doesn't matter; both are still 150W+ at load) into such an enclosure (essentially no active airflow directly out of the case)? None of these mITX solutions are fully exhausting.

That is the main challenge with building SFF: there are limits on every component. Limited space, limited cooling. That's why building a small, powerful, silent SFF machine is a real challenge. It is difficult; you have to mod, you have to undervolt, you have to consider every silent way there is for cooling, both passive and with silent fans.

But I reckon you see the challenges and can understand the fun of builds like this.
 
Personally if I was doing a low profile PC I'd buy the chassis for "show" (power and optical disc) then use long length wires to a normal PC somewhere else (behind the TV stand or basement, attic, whatever). 15'+ cables are very cheap. Even with buying the cables it saves $ and you don't have to sacrifice power. Also makes it completely silent.

But if you are going to do this, why even have the case for show? :p

If it weren't for my 4k screen requiring short-ish HDMI2 cables, I'd totally stick my case in the basement, and never worry about fan noise again.

I don't have a computer so that it can look pretty (even though I think it does). I have it so it can display things on screen :p
 
You can fit a full size video card into a mITX setup (for example, my Ncase can hold a full size card easily), but you sacrifice room. The Ncase already requires a tradeoff, full size video card or full size power supply, and if you go that one-or-the-other route, the internals will be a cramped rat's nest of wires. The solution? Use both a small power supply AND a small video card (as I did). Only then will your case have at least some room (for an extra fan) to circulate air and keep the temps down even a little.

As for all the Titan X griping I'm seeing in this thread (which is not relevant at all to the AMD Nano), I'm guessing it's from people that don't own one and just want to bash on people that, for various reasons, do own one.
 
Zarathustra[H];1041821709 said:
As far as SFF goes, been there, done that.

When I returned to the games/overclocking/building hobby in 2009 after a 5 year hiatus I decided, small and sleek was the way to go, so I built an i7-920 system around one of those Shuttle SFF units.

I quickly found that the cooling stunk, and the proprietary PSU format topped out at 400W, so when I later got a GTX 580 it overloaded it and I couldn't upgrade (for a while I tried one of those drive bay PSUs, which later got a failing [H] review, to supplement the proprietary PSU, but that was a major fail).

I also quickly ran out of expansion space. Towards the end of using that system, I wanted more PSU, I wanted more PCIe expansion, and I wanted better cooling. I eventually sold the shuttle system (case, motherboard, CPU and RAM) here on the hardforums, and built a "waiting for bulldozer" mid tower build around a 990fx board and a Phenom II x6 1090T in summer of 2011. When bulldozer was a major disappointment, I bought the x79 + 3930k I am using now, in late 2011, and now I'm in a full tower, and will never go small form factor again.

To me the size doesn't matter. I'm putting it under my desk anyway; it is out of the way and not a problem. I actually like it better, as the SFF went on my desk, and all the fans and noise were closer to my head, and it took up desk space.

I have a couple of compact computers serving as HTPC's, but other than that, I learned my lesson from SFF. For my main rig, I'm just never going to be happy with it.

That's the beauty of SFF: they are nothing but trouble, like women, but god you still love them, and want them :)
 
I don't know if you have a Fury X on hand, but have you considered using it, underclocking it to 850MHz/900MHz/1000MHz and benching it for worst/average/best case scenarios for the R9 Nano? Or did the NDA specifically state NOT to do that or publish that info?

I don't think that would really work. You would have to set power limits and thermal limits. Not only that, the water cooling on the Fury X is better, so you would never end up hitting the thermal limit, and thus never downclock under load.
 
If you want the best, you HAVE to get the Titan X. There is no option. If you want the best you buy a Ferrari, not a Corvette (not to slam corvettes which I love). Sure the value isn't there. But you really don't have an option if you want the best. If AMD came up with something faster, at a higher price, then that super product would get praise at the "cost is no object" target.

GDDR5 would be a no go because the Fury's memory controller is designed for an extremely wide slow bus.

Do we have real sales numbers of the Fury line? Sold out doesn't mean much if they only have 100 sold total.

Either way, I agree with Brent. If AMD couldn't market the Fury cheaper, they shouldn't have made it at all. Soon stock will be sitting on the shelves once the typical inrush of early adopters and die hard fan boys are out.

I had such high hopes for AMD too. :(

Given the launch price for the Fury Nano and the binning necessary to get chips that can perform within the target power envelope, I wouldn't be surprised if they are only manufacturing enough to sell to the die hards. It doesn't seem like a bad product. The price is a bit extreme given the competition, but considering the cost to manufacture these cards and AMD's current fiscal situation, I don't know if they could afford to sell these for less. Of course they are probably not going to make much money off of this particular product either way...
 
Zarathustra[H];1041822036 said:
I agree with most of this. In the mid range, Radeons make a lot of sense, but don't get a lot of attention because of the halo card fails.

This is why companies make and sell Halo products, even though they only sell in smallish quantities.

That being said, I would argue that current DX12 benchmarks are of limited value.

It could be that the 9xx series geforce cards wind up being a massive fail when it comes to DX12, compared to Radeon cards, but I am going to gamble that is not going to be the case.

Right now we have one title to run benchmarks on and it is in pre-alpha.

As DX12 titles start actually coming out, both AMD AND Nvidia will likely have better optimized drivers in place to make use of the feature set, and at that point I doubt we will see any massive fail on either side. I suspect we will see both scale similarly to how they scale in Dx11, but quite honestly, there is no way at all to know right now. Current DX12 benchmarks are simply not of very much value to make conclusions on.

As I see it, trying to read the tea leaves and optimize GPU longevity by shopping for either more VRAM or better performance in future APIs is a fool's errand. Just buy what works now, and when the future comes, you can always reassess and buy something else.

I may have had my previous Titan a very long time (March 2013 to July 2015), but that is hardly the norm for me. My two 980 Tis are currently great, but I am going to be watching very closely again next generation when Pascal and AMD's successor to the Fury X are coming out.

Ideally I want to sell the 980Ti's and get something that can match their performance in SLI but on one GPU, or at least not fall too far behind.

So, my point was, I'm not going to fret about DX12 now. I probably won't have these GPU's anymore when DX12 is a real issue anyway.

Nope. The 290x will continue pounding the GTX980Ti into submission in DX12, unless nVidia pays off Devs again.

First off, nVidia is posting their true DirectX 12 performance figures in these tests. Ashes of the Singularity is all about parallelism, and that's an area where, although Maxwell 2 does better than previous nVIDIA architectures, it is still inferior when compared to the likes of AMD's GCN 1.1/1.2 architectures. Here's why...
Maxwell's Asynchronous Thread Warp can queue up 31 Compute tasks and 1 Graphics task. Now compare this with AMD GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 Compute tasks, for a total of 64, coupled with 1 Graphics task by the Graphics Command Processor. See below:
Each ACE can also apply certain post-processing effects without incurring much of a performance penalty. This feature is heavily used for lighting in Ashes of the Singularity. Think of all of the simultaneous light sources firing off as each unit in the game fires a shot, or the various explosions which ensue, as examples.
This means that AMD's GCN 1.1/1.2 is best adapted to handling the increase in draw calls now being made by the multi-core CPU under DirectX 12.
Therefore, in game titles which rely heavily on parallelism, likely most DirectX 12 titles, AMD GCN 1.1/1.2 should do very well, provided they do not hit a Geometry or Rasterizer Operator bottleneck before nVIDIA hits their draw call/parallelism bottleneck. The picture below highlights the draw call/parallelism superiority of GCN 1.1/1.2 over Maxwell 2:
A more efficient queueing of workloads, through better thread parallelism, also enables the R9 290x to come closer to its theoretical compute figures, which just happen to be ever so shy of those of the GTX 980 Ti (5.8 TFLOPS vs 6.1 TFLOPS respectively), as seen below:
What you will notice is that Ashes of the Singularity is also quite hard on the Rasterizer Operators, highlighting rather peculiar behavior: an R9 290x, with its 64 ROPs, ends up performing near the same as a Fury X, also with 64 ROPs. A great way of picturing this in action is the graph below (courtesy of Beyond3D):
As for the folks claiming a conspiracy theory: not in the least. The reason AMD's DX11 performance is so poor under Ashes of the Singularity is that AMD literally did zero optimizations for that path. AMD is clearly looking to sell Asynchronous Shading as a feature to developers because their architecture is well suited for the task. It doesn't hurt that it also costs less in terms of research and development of drivers. Asynchronous Shading allows GCN to hit near full efficiency without requiring any driver work whatsoever.
nVIDIA, on the other hand, does much better at serial scheduling of workloads (when you consider that anything prior to Maxwell 2 is limited to serial scheduling rather than parallel scheduling). DirectX 11 is suited for serial scheduling, therefore naturally nVIDIA has an advantage under DirectX 11. In this graph, provided by Anandtech, you have the correct figures for nVIDIA's architectures (from Kepler to Maxwell 2), though the figures for GCN are incorrect (they did not multiply the number of Asynchronous Compute Engines by 8):
People are wondering why Nvidia is doing a bit better in DX11 than DX12. That's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents as well as their own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we're seeing with the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12. Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which re-compiles and replaces shaders which are not fine-tuned to their architecture on a per-game basis. nVidia's driver is also multi-threaded, making use of idling CPU cores to recompile/replace shaders. The work nVIDIA does in software, under DX11, is the work AMD does in hardware, under DX12, with their Asynchronous Compute Engines.
But what about the poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is suited towards parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck on AMD hardware under DX11 at low resolutions (say 1080p, and even 1600p for Fury X), as DX11 is limited to 1-2 cores for the graphics pipeline (which also needs to take care of AI, physics etc). Replacing or re-compiling shaders is not a solution for GCN 1.1/1.2 because AMD's Asynchronous Compute Engines are built to break down complex workloads into smaller, easier-to-handle workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel... in come Mantle, Vulkan and DirectX 12.
People wondering why Fury X did so poorly at 1080p under DirectX 11 titles? That's your answer.
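To sanity-check the queue arithmetic in that wall of text (taking the post's figures at face value, not independently verified):

```python
# Queue depths as claimed in the post above, not independently verified.
maxwell2 = {"graphics": 1, "compute": 31}  # Asynchronous Thread Warp: 31 + 1
ACES, QUEUES_PER_ACE = 8, 8                # GCN 1.1/1.2: 8 ACEs x 8 queues each
gcn = {"graphics": 1, "compute": ACES * QUEUES_PER_ACE}

print(f"Maxwell 2: {maxwell2['compute']} compute queues; "
      f"GCN 1.1/1.2: {gcn['compute']} compute queues")
```

So if the claimed figures hold, GCN exposes roughly twice the concurrent compute queues of Maxwell 2, which is the whole basis of the parallelism argument.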

AMD may be trying to justify their ridiculous prices with this, but the issue is that it's a moot point: it's going to be many years before we have as many DX12 titles as we have DX11 titles. Remember, Skyrim isn't that old and it's DX9, so just because most of AMD's line (290x and higher) obliterates the 980 Ti and Titan X in DX12 doesn't mean it actually matters yet.

A video which talks about Ashes of the Singularity in depth: https://www.youtube.com/watch?v=t9UACXikdR0


Don't count on better DirectX 12 drivers from nVIDIA. DirectX 12 is closer to the metal, and it's all on the developer to make efficient use of both nVIDIA's and AMD's architectures.
Copy and Pasted from: http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/400#post_24321843


I am GOING to get a GTX 980 Ti to replace my 290x, despite the fact the 290x will be faster than the 980 Ti in DX12, because by the time a decent number of DX12 titles are available to justify putting the 290x back in, I would probably have upgraded anyway, probably to a GTX 1080 Ti or R9 Fury II.
 
That would not be reflective of the Nano's actual performance, because temperatures impact the card's throttling and clock speeds. No one knows what the Nano will realistically run at.

Kyle, if you do test the Nano, I would really like to see some testing with the GPU installed in an actual SFF case with realistic airflow. Given the constraints these cards are intended to handle, I don't think an open air bench test or even a normal ATX case would be reflective of actual use.

Well, I totally agree with you there. This card is meant for SFF and should be tested as such, because those of us who want it will mostly put it in as small a space as we can :)
 
Nope. The 290x will continue pounding the GTX980Ti into submission in DX12, unless nVidia pays off Devs again.

*snip*
You can predict the future of all DX12 games based on a single AMD-sponsored benchmark. That is impressive... :rolleyes:
 
Zarathustra[H];1041821824 said:
I would argue that both would be of value.

I agree that most SFF cases have limited airflow, but that isn't necessarily the case for all of them.

That's not an easy one. What is SFF? Is it SFF when you have room for a standard GPU, standard PSU, 3.5" HDD and so on, or is it just a small tower? Personally, I call those cases hybrids...
 
You can predict the future of all DX12 games based on a single AMD-sponsored benchmark. That is impressive... :rolleyes:

Read the diatribe underneath. It comes down to how each architecture is designed, and the fact that GCN is better suited for DX12 than Maxwell. This is not so much a prediction as a statement of fact about how each architecture is designed, just as valid as saying the Sun is going to come up tomorrow. Is there a chance the Sun may not come up tomorrow? Sure, but physics and math would argue otherwise.
 
Read the diatribe underneath. It comes down to how each architecture is designed, and the fact that GCN is better suited for DX12 than Maxwell. This is not so much a prediction as a statement of fact about how each architecture is designed, just as valid as saying the Sun is going to come up tomorrow. Is there a chance the Sun may not come up tomorrow? Sure, but physics and math would argue otherwise.
You can't judge an entire API based on one game, no matter how long the diatribe is. :p
At the very least if AOTS were unassociated with AMD+Nvidia I might be more inclined to trust it. The game started off as a Mantle tech demo.

Ark is supposed to get a DX12 patch today. Maybe we should benchmark that game.
 
You can't judge an entire API based on one game, no matter how long the diatribe is. :p
At the very least if AOTS were unassociated with AMD+Nvidia I might be more inclined to trust it. The game started off as a Mantle tech demo.

Ark is supposed to get a DX12 patch today. Maybe we should benchmark that game.

I'm pulling for Arkham Knight to get a DX12 patch (dreaming, I know) before it gets re-released. You'll see this across the board for DX12, so if you got the game, and the hardware, feel free to test the assessment. I'm pretty sure you'll see the Sun come out tomorrow BTW :D
 
I'm pulling for Arkham Knight to get a DX12 patch (dreaming, I know) before it gets re-released. You'll see this across the board for DX12, so if you got the game, and the hardware, feel free to test the assessment. I'm pretty sure you'll see the Sun come out tomorrow BTW :D
You can only hear the phrase "AMD is about to make a comeback" so many times before you become immune to it. It's been the same thing since Bulldozer... Nearly 10 years. People said the same thing about HBM this cycle (I was hopeful, too) and it was a flop.

So forgive me if I am skeptical of early DX12 benchmarks, as it seems to be the same situation we've seen a dozen times already. People were excited for the Nano and AMD even managed to disappoint again. I don't think they're capable of doing anything right. It's like a broken record; I can't take it seriously anymore, and it makes no sense to set yourself up for disappointment over and over again. So I'm derailing that hype train until it actually materializes.
 
You can only hear the phrase "AMD is about to make a comeback" so many times before you become immune to it. It's been the same thing since Bulldozer... Nearly 10 years. People said the same thing about HBM this cycle (I was hopeful, too) and it was a flop.

So forgive me if I am skeptical of early DX12 benchmarks as it seems to be the same situation we've seen a dozen times already. People were excited for the Nano and AMD even managed to disappoint again. I don't think they're capable of doing anything right.

What? I can of course understand those fools who thought the Nano would come cheap being disappointed. But for those of us who understood what the Nano was aimed at, how can we be disappointed? It's a full SFF Fiji with just a tad lower clock. AMD is aiming to give SFF the best it possibly can. My hope is that Nvidia will do the same, but I don't know if they can get the 980 Ti down to SFF size; at least they could give it a go with the 980.
 
You can't judge an entire API based on one game, no matter how long the diatribe is. :p
At the very least if AOTS were unassociated with AMD+Nvidia I might be more inclined to trust it. The game started off as a Mantle tech demo.

Ark is supposed to get a DX12 patch today. Maybe we should benchmark that game.

The tearful bros who have now turned their hopes to the latest fantasy, that DX12 will somehow be the magic savior of AMD GPUs, are in for a very rude and sobering awakening. Never mind that DX12 will take years to hit its stride, years AMD doesn't really have, and NV isn't exactly going to be standing still.

And when benchmark after benchmark rolls in and there's no performance edge for AMD, they'll blame it on Nvidia or Gameworks or biased reviewers or the wrong games being reviewed. Like clockwork.
 
Ark is releasing its DX12 patch today, so we will have another game to compare with. I don't think it will give the kind of big performance increase we see in RTS games, which soak up CPU cycles, but we will see.
 