390X coming soon, in a few weeks

EagleOne



The news comes from the LinkedIn profile of Linglan Zhang, who is currently employed by AMD as a System Architecture Manager. His profile lists that he is working on a new GPU SoC that uses a 2.5D design and rocks a TDP of 300W. We should expect the new GPUs from AMD to be made on the 28nm process, since the 16nm and 20nm die shrinks are still at least six months or more away - from both AMD and NVIDIA.

Comparing GDDR5 against HBM is something people really need to start looking at: the I/O per chip on GDDR5 is just 32-bit, while 4-Hi HBM 'Stacked DRAM' pumps things up to a huge 1024-bit. Max bandwidth per pin on GDDR5 is 7Gbps, while HBM sits at 1Gbps, but thanks to the much wider bus the max bandwidth per GDDR5 chip is just 28GB/s, while the HBM technology can scale between 64GB/s and 256GB/s per stack.
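Those figures are easy to sanity-check: per-device bandwidth is just bus width times per-pin data rate, converted from bits to bytes (a quick sketch; the function name here is mine, not anything official):

```python
def chip_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Per-device bandwidth in GB/s: bus width x per-pin data rate, bits -> bytes."""
    return bus_width_bits * pin_rate_gbps / 8

# One 32-bit GDDR5 chip at 7Gbps per pin vs one 1024-bit 4-Hi HBM stack at 1Gbps per pin.
print(chip_bandwidth_gbs(32, 7.0))    # 28.0 GB/s per GDDR5 chip
print(chip_bandwidth_gbs(1024, 1.0))  # 128.0 GB/s per HBM stack
```

So even at a much lower per-pin rate, the 1024-bit interface puts a single HBM stack well past a GDDR5 chip; the article's 64-256GB/s range corresponds to varying the per-pin rate.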

When can we expect the new Radeon R9 390X and R9 380X? Reports state that AMD is already taping out the new R9 380X, which means we could see GPUs in consumers' hands in the coming weeks, so AMD might unleash these new cards before NVIDIA kicks off its GPU Technology Conference in March, where we might see something unveiled - last year, for example, we saw the GeForce GTX Titan Z. Whatever happens, it's exciting times in the world of GPUs, that's for sure.

What do you think?
 
Fluff.
He has no idea what he is talking about.

Side note- I was under the impression that HBM would be manufactured and packaged by GF...
 
There have been a lot of rumours floating around about AMD releasing new high-end cards using HBM some time in Q1 2015, but I won't believe anything until the official word.

P.S. If they're still taping out the chip now, it will be ages; that would mean they haven't produced even a single test chip and might have to do a respin or two before the final silicon. The gap between tape-out and retail availability can be 6-12 months for GPUs.
 
There have been a lot of rumours floating around about AMD releasing new high-end cards using HBM some time in Q1 2015, but I won't believe anything until the official word.

P.S. If they're still taping out the chip now, it will be ages; that would mean they haven't produced even a single test chip and might have to do a respin or two before the final silicon. The gap between tape-out and retail availability can be 6-12 months for GPUs.

Fiji definitely taped out quite a while ago. The 500mm²+ GPU, sometimes referred to as Bermuda, supposedly taped out shortly after that.

I've been hearing Q1 for quite a while now, so I'm thinking that's pretty solid.
 
I would assume all of that is under NDA and may put his job on the line.

But if that graph is remotely close on memory bandwidth, then with 4-Hi HBM you could get 1280GB/s of memory bandwidth.......
 
I really wish it were true. I really want to upgrade from my 290 but going to a 980 hardly seems worth it. I figure a 390X will be a good jump.
 
I found this on TweakTown; the news was 45 minutes old when I copied and pasted it here for you guys. I did not write it. Is it possible the 390X lands in Q2?
 
That BW will be epic in DirectCompute situations that would otherwise be starved by the split-lane memory bandwidth we have now. For gaming, though, I doubt we'll see it; we'll have to wait and see how many ROPs it has and what else we can push those shader cores to do. Otherwise we are looking at a core that could be 30%-ish faster than the 290X in games. Educated guesstimate: we will see 96 ROPs, standard GDDR5 and 4k shader cores. Essentially, for gaming we will get a few extra clusters if they are sticking to GCN 2.0.

If this GPU really does have 4000 compute cores, I wonder how it will stack up against Intel's offering? It will be interesting. I think it would be in AMD's interest to get these things into a bunch of farms, especially if they have access to over 600GB/s of bandwidth.
 
Educated guesstimate: we will see 96 ROPs, standard GDDR5 and 4k shader cores. Essentially, for gaming we will get a few extra clusters if they are sticking to GCN 2.0.

My guess as well. I think this will likely be the "380X", arriving in March to compete with the GTX 980, then the "390X" with HBM arriving later in the year (maybe late Q2) to compete with a GTX 980 Ti. And I will eat my shoe if the 380X's bandwidth is anywhere near 600GB/s.

I do think one thing is clear from rumors over the last month: we're stuck on 28nm until late this year/early 2016.
 
I thought it was already determined that this new 'beast' was only like the 370X or 360X and not the 390X. :rolleyes:
 
I hate it when they show specs against a competitor with different architecture. It's meaningless.

Let me know when it's actually here. I never believe hype.
 
AMD likes to paper launch their products (Hawaii). If they had anything coming soon there would already be an event set up with invites sent out to press. So it's probably not going to be coming any time soon.
 
AMD likes to paper launch their products (Hawaii). If they had anything coming soon there would already be an event set up with invites sent out to press. So it's probably not going to be coming any time soon.

Really? And NVIDIA doesn't paper launch? It's the world we live in.
 
AMD likes to paper launch their products (Hawaii). If they had anything coming soon there would already be an event set up with invites sent out to press. So it's probably not going to be coming any time soon.

Exactly my first thought. Most likely a paper launch, if anything.
 
I don't put any credibility in leaked specs, they are usually wrong.

That being said, if the 390X supports mixed rotation/resolution Eyefinity as is rumored, and it offers a real performance boost over my Titan without being WAY too expensive, I'll buy it on launch day.
 
No, they don't. Go check the last several releases. Meanwhile, we are still waiting for FreeSync a year later.

Well, AMD has done many paper launches, though not all of their launches are paper launches. However, FreeSync is a bad example; AMD isn't the company manufacturing the monitors.

Nvidia has also done paper launches, as have Intel and most other companies. But AMD does do it more often than most other companies.
 
Well... AMD is famous for keeping things under wraps and releasing out of the blue... which pisses me off.
 
300W TDP (if true)... this thing will cook bacon and eggs more effectively than a seasoned cast iron skillet on a commercial grade gas range! They must still be using the same design software that screwed up Bulldozer.

If there is one area I had to pick that I really wish AMD would show some signs of actual improvement in for both their CPU and GPU offerings, it's power efficiency.

/rant

Alas, I've been anxiously awaiting this beast to arrive and see what it can do!
 
300W TDP (if true)... this thing will cook bacon and eggs more effectively than a seasoned cast iron skillet on a commercial grade gas range! They must still be using the same design software that screwed up Bulldozer.

If there is one area I had to pick that I really wish AMD would show some signs of actual improvement in for both their CPU and GPU offerings, it's power efficiency.

/rant

Alas, I've been anxiously awaiting this beast to arrive and see what it can do!

It's not surprising that the TDP would be on the high side, as it was designed for the 20nm process, but the 20nm process isn't ready so it's launching on 28nm.

Makes you wonder about the cooling solution though.
 
Neither Nvidia nor AMD will get my business.

That's no problem, but why are you so hung up on a die shrink? You mean to tell me that if AMD or NVIDIA release cards on 28nm that perform better, you won't even consider them because they're 28nm?

And things have changed now; a die shrink doesn't mean massive performance increases anymore.

If the rumour mills are to be believed, then it's more than likely there will be no die shrink until 2016. Are you willing to wait that long?

I just don't understand the mindset of basing a purchase purely on process node.
 
Why? Because, as previously mentioned, the TDP is way too high - hell, I'd probably say even 200W is too high. I also think the cards ought to be going DOWN in price as they continue to use an old node, since none of the cost is coming from switching to a new process node.

It's frustrating, too, to keep hearing that the next product will finally surpass 28nm... and then NOPE. I'd actually been hopeful that this one would finally move past 28nm because of the rumors, but no, they were wrong again. It seems people simply spread that rumor about every new GPU. I guess that's nothing new. Getting REALLY old. People should kindly STFU about it when they clearly have no idea.

I can EASILY wait until 2016, if not 2017. I play games that are good, not games that are meant to stress my GPU but are good for nothing else. :D My GTX 570 will manage. I have yet to find a game that I can't play at 1080p with the highest settings and maybe 2-4x antialiasing. It did struggle with some games at 2560x1440 on high settings, but that problem got "fixed" when I cracked the panel of that monitor during a move.
 
Well, considering how difficult the transition to 28nm was, I'm not really surprised they're skipping 20nm. As for cards coming down in price - well, look at the 970/980; their pricing is pretty OK compared to the previous generation.

It's good that you can wait that long :) If you only care about games that are good, then your 570 might last you a lot longer!!
 
The TDP is in line with the specs. Of course power efficiency is important, and NVIDIA is doing a great job combining it with performance. But when it comes to high-end GPUs, most people put performance over power efficiency. I will take the gas-guzzling beasts over lesser performers, personally. I know I speak for many.
 
Why? Because, as previously mentioned, the TDP is way too high. I also think the cards ought to be going DOWN in price as they continue to use an old node, since none of the cost is coming from switching to a new process node.

It's frustrating, too, to keep hearing that the next product will finally surpass 28nm... and then NOPE.

I can EASILY wait until 2016, if not 2017. I play games that are good, not games that are meant to stress my GPU but are good for nothing else. :D My GTX 570 will manage.

That's the problem with staying on an older, larger process node: dies per wafer don't increase without a die shrink, so overall R&D-plus-manufacturing cost may actually increase with new product runs over time. Of course, this also depends heavily on customer demand for the new products; if they can't tame the TDP, they are not going to get as much sales volume in this age of low-power/low-heat focus.
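To rough out that per-wafer point, the standard first-order estimate for gross dies per wafer (usable wafer area over die area, minus an edge-loss term) shows how adding hardware on the same 28nm wafers means fewer chips; the die areas below are illustrative (Hawaii is ~438mm², and the rumoured big chip is the "500mm²+" part mentioned earlier):

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: wafer area / die area, minus a term for partial edge dies.
    Ignores scribe lines and defect-driven yield loss."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(gross_dies_per_wafer(300, 438))  # Hawaii-class die on a 300mm wafer
print(gross_dies_per_wafer(300, 550))  # an illustrative ~550mm2 part: noticeably fewer dies
```

Without a shrink, the only way to add execution hardware is a bigger die, and the candidate count per wafer drops accordingly - before defect yield even enters the picture.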

So what if the 390X comes out of the corner and happens to be able to take on SLI 970s or CrossFire 290Xs (theoretical example only)? If the end user has to pay $1500+ per GPU because of the insane cooling solution included with it, or buy a couple hundred dollars' worth of extra equipment just to keep the heat down, plus a pricey high-wattage PSU to keep it fed, then the competition - an already amazingly power-efficient architecture that is only going to get better with each revision and new generation, at an outstanding price point - becomes way more attractive.
 
300w and almost the same performance as SLI 970? Would that change the perspective?
 
AMD likes to paper launch their products (Hawaii). If they had anything coming soon there would already be an event set up with invites sent out to press. So it's probably not going to be coming any time soon.

At least they don't use wood screws and 2-by-4s for video card mock-ups.
 
300w and almost the same performance as SLI 970? Would that change the perspective?

For me, it depends on how efficient the cooling solution is. If this thing runs hot... and it probably will, unless it ends up being a triple-slot form factor with a massive and effective HSF... then no thanks.

I'd much rather deal with two graphics cards at ~165-175W TDP each at full load than have 300W TDP contained on a single card.

Each of my 780s is ~250W TDP, and they both run hot as hell under full load, so I would never consider a next-gen GPU with an even higher TDP.
 
I'd have to say my 970s in SLI were very cool, but I also hope AMD doesn't go with a hybrid solution as the only option. I already have an H100i on my 5820k; I don't need another AIO in my rig.
 
So what if the 390X comes out of the corner and happens to be able to take on SLI 970s or CrossFire 290Xs (theoretical example only)? If the end user has to pay $1500+ per GPU because of the insane cooling solution included with it, or buy a couple hundred dollars' worth of extra equipment just to keep the heat down, plus a pricey high-wattage PSU to keep it fed, then the competition - an already amazingly power-efficient architecture that is only going to get better with each revision and new generation, at an outstanding price point - becomes way more attractive.

What a completely one-sided and inaccurate post; way to completely overestimate everything on the AMD side of things. LOL. You could cool the reference 290 with the Gelid Icy cooler for about $25, and it worked perfectly. The $1500 GPU you refer to is a dual-card solution, the 295X2, and it was still half the price of the Titan Z. And what expensive PSU do you need? I'm running my 290 perfectly on a 650W Corsair.
 
There's been talk/rumor for quite some time now that the 390X may be a water-cooled-only part, so...

As you've done with mine, your post is taken with a grain of salt.
 