390X coming soon, in a few weeks.

Oh silly season.

Japan roadmap was for professional/server parts/markets.

There tends to be some lag-time between the consumer GPU parts and the professional GPU parts.
Previously it was ~12 months; now it is down to 6-8 months. CPUs tend to hit the market a bit faster, depending on the SKU. The x2150 was a week or two, I think.
 
Not at 4K perhaps, but plenty are at 1080p, especially strategy titles :)

That's an artificial limitation created by the engine used. The CPU isn't bottlenecking the game; it's struggling to push instructions to the GPUs fast enough to drive higher frame rates.

With or without Mantle/DX12/Vulkan, if the engine only issues 15,000 draw calls per frame you will see the same limitation in games as you do today.
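To make the draw-call point concrete, here's a minimal back-of-the-envelope sketch (all figures are invented for illustration, nothing here is measured or from the thread):

[code]
# Purely illustrative sketch: how per-frame draw-call submission cost on the CPU
# can cap the frame rate regardless of how fast the GPU is.
# All numbers below are made-up example values, not measurements.

DRAW_CALLS_PER_FRAME = 15_000   # what the engine is designed to issue per frame
CPU_US_PER_CALL = 4.0           # CPU-side cost to submit one draw call (microseconds)
GPU_FRAME_MS = 8.0              # time the GPU itself needs to render the frame

cpu_frame_ms = DRAW_CALLS_PER_FRAME * CPU_US_PER_CALL / 1000.0
frame_ms = max(cpu_frame_ms, GPU_FRAME_MS)   # the slower side sets the pace

print(f"CPU submission: {cpu_frame_ms:.1f} ms, GPU render: {GPU_FRAME_MS:.1f} ms")
print(f"Frame-rate cap: {1000.0 / frame_ms:.0f} fps")
# A lower-overhead API (Mantle/DX12/Vulkan) shrinks CPU_US_PER_CALL; whether that
# shows up as more fps depends on how the engine was designed to spend its draw-call
# budget in the first place.
[/code]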
 
That's an artificial limitation created by the engine used. The CPU isn't bottlenecking the game; it's struggling to push instructions to the GPUs fast enough to drive higher frame rates.

With or without Mantle/DX12/Vulkan, if the engine only issues 15,000 draw calls per frame you will see the same limitation in games as you do today.


The way we hear it, CPU progress is incremental while GPU progress is exponential; in the future, even the latest top-end CPU will create a bottleneck for the GPU.
 
I'm still running a Q6600 lol... I'll probably just wait for Skylake before I build a new PC now.

Aren't most games more GPU dependent now? Even the AAA games like Star Citizen. Is there any real benefit in buying a high-end CPU like a 6-core or 8-core?

Yes, today's games are more GPU dependent, but the Q6600 is fairly old and definitely a bottleneck. I noticed a big jump going from an overclocked Q9550 to my current 4.7 GHz overclocked 2500K Sandy Bridge. However, this is the point where CPU tech has halted. Sandy Bridge, even a mere quad core without hyper-threading like mine, is so powerful that there is no point in upgrading. Improvements in Ivy Bridge and onwards are marginal. There is nowhere to go. And actually I'm glad. Saves money for kickass GPUs.
 
I thought DX12 was supposed to make games less CPU dependent like Mantle? Am I wrong there?

No.
Mantle already shows what happens with game engines built for DX11.
New game engines made for DX12 will show the benefits a lot better.
Stardock's engine shows 400-600% gains, which means 10 fps on today's hardware under DX11 becomes 40-60 fps on the same hardware with DX12 instead (API/engine).
It already shows a massive improvement if you build the game engine for DX12.
AMD changed the gaming industry with Mantle; in one single move they did more for us gamers than anyone else ever has.
 
Yes, today's games are more GPU dependent, but the Q6600 is fairly old and definitely a bottleneck. I noticed a big jump going from an overclocked Q9550 to my current 4.7 GHz overclocked 2500K Sandy Bridge. However, this is the point where CPU tech has halted. Sandy Bridge, even a mere quad core without hyper-threading like mine, is so powerful that there is no point in upgrading. Improvements in Ivy Bridge and onwards are marginal. There is nowhere to go. And actually I'm glad. Saves money for kickass GPUs.

Sandy to Haswell was about 17% IPC improvement on average according to Anandtech.

Skylake might finally prove to be a worthwhile upgrade, and should hopefully give a 25% IPC improvement over Sandy Bridge. And it only took, what, TWO die shrinks and TWO uarch changes to get there lol :eek:
 
I went from Sandy to Haswell (Devil's Canyon) and it was a much more significant upgrade than I thought it would be.
 
I went from Sandy to Haswell (Devil's Canyon) and it was a much more significant upgrade than I thought it would be.

Did Haswell clock much better? It'll be the 17% multiplied by your clock difference as well.

I.e. if your clock is 20% higher it'd be 1.2 * 1.17 = 1.404, or a 40.4% increase in performance.
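As a quick illustrative sketch (the clock values below are placeholders, not anyone's actual settings), the combined gain is just the two ratios multiplied:

[code]
# Illustrative only: combined speedup = IPC ratio * clock ratio
ipc_ratio = 1.17            # ~17% Sandy -> Haswell IPC gain (figure from the thread)
clock_ratio = 4.5 / 4.5     # example: same overclock on both chips -> 1.0

speedup = ipc_ratio * clock_ratio
print(f"Estimated speedup: {speedup:.3f}x")
# With a 20% higher clock instead (clock_ratio = 1.2): 1.17 * 1.2 = 1.404 -> ~40.4% faster
[/code]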
 
I went from Sandy to Haswell (Devil's Canyon) and it was a much more significant upgrade than I thought it would be.

Interesting.

Where did you see the biggest impacts?

My thought process has always been that in sandy->haswell you'd see benchmark improvements and some improvement in rendering/encoding times but that in every day use they'd be virtually indistinguishable from each other.

I certainly have not had any reason to upgrade my 3930K yet, but then again, it's a Sandy-E.
 
GPUs are in a performance/parallelism kick right now: keep thermal output in a certain range (TDP) and try to get as far as possible within that.

CPUs are in a major efficiency kick. They are trying to cut as much power use as possible while still retaining performance.

Both are focusing on different things, so we'll likely see some crazy things on both fronts... but I feel CPUs will stay at their current performance level for a while without any massive gains. We will see them in ever smaller and more interesting form factors, though.
 
Zarathustra[H];1041518855 said:
Interesting.

Where did you see the biggest impacts?

My thought process has always been that in sandy->haswell you'd see benchmark improvements and some improvement in rendering/encoding times but that in every day use they'd be virtually indistinguishable from each other.

I certainly have not had any reason to upgrade my 3930K yet, but then again, it's a Sandy-E.

Mine was a 2500K OCed to 4.5, which is where the DC sits, so clocks weren't the improvement.

Basically desktop responsiveness, loading times and general game performance is where I saw the improvements. I tend to play a lot of RTSs and ARPGs, so increased IPC will certainly have an impact.

Moving the platform from P68 to Z97 I'm sure also contributed.
 
The way we hear it, CPU progress is incremental while GPU progress is exponential; in the future, even the latest top-end CPU will create a bottleneck for the GPU.

It was the API used (as well as how it was used) in the engine that kept CPUs from mattering in games for the longest time. And it will depend on what developers want to do in the future, even if they use the new APIs in future engines. If they want to build a 300k-draw-call frame they can, and then the CPU would matter; if they want to build a frame with 15k, then the CPU wouldn't.
 
Mine was a 2500K OCed to 4.5, which is where the DC sits, so clocks weren't the improvement.

Basically desktop responsiveness, loading times and general game performance is where I saw the improvements. I tend to play a lot of RTSs and ARPGs, so increased IPC will certainly have an impact.

Moving the platform from P68 to Z97 I'm sure also contributed.

Interesting.

In my own builds around the house (currently my i7-3930K, my stepson's FX-8350, HTPC1 A10-7850K, HTPC2 E-350) I don't notice much difference in desktop responsiveness at all despite the VAST difference between them in performance. They all have SSDs and sufficient RAM, and their desktops are responsive and perform well.

Granted they are all on Linux though.

There are definite differences at load with certain things (the 350 - for instance - is awful if a package update triggers a recompilation of the Nvidia kernel driver) but for general desktop use (browsers, videos, pictures, etc) there is barely a noticeable difference between them.
 
Zarathustra[H];1041519248 said:
Interesting.

In my own builds around the house (currently my i7-3930K, my stepson's FX-8350, HTPC1 A10-7850K, HTPC2 E-350) I don't notice much difference in desktop responsiveness at all despite the VAST difference between them in performance. They all have SSDs and sufficient RAM, and their desktops are responsive and perform well.

Granted they are all on Linux though.

There are definite differences at load with certain things (the 350 - for instance - is awful if a package update triggers a recompilation of the Nvidia kernel driver) but for general desktop use (browsers, videos, pictures, etc) there is barely a noticeable difference between them.

That sums up why Intel doesn't offer anything new anymore:
it's the SSD you buy that provides the better user experience, and that's what's actually noticeable.

The future is AMD.
 
The way we hear it, CPU progress is incremental while GPU progress is exponential; in the future, even the latest top-end CPU will create a bottleneck for the GPU.

It depends.

A lot of the increase in demand on GPUs comes from increasing resolution. That increase in demand on the GPU is exponential. The increase in demand on the underlying CPU typically isn't.
 
Someone posted in the displays forum that the 390X will be DisplayPort only, no HDMI. I haven't heard that before. Why would AMD offer up a card without HDMI 2.0 support at this point? That would be pretty dumb, to exclude such a big part of the market.

If that were the case I'd probably go with an Nvidia card.
 
Because pretty much any monitor made in the last 8 years has DP on it.

And if it doesn't, they make adapters. Passive adapters are like 10 bucks and can handle everything up to 1920x1200 just fine.
 
No.
Mantle already shows what happens with game engines built for DX11.
New game engines made for DX12 will show the benefits a lot better.
Stardock's engine shows 400-600% gains, which means 10 fps on today's hardware under DX11 becomes 40-60 fps on the same hardware with DX12 instead (API/engine).
It already shows a massive improvement if you build the game engine for DX12.
AMD changed the gaming industry with Mantle; in one single move they did more for us gamers than anyone else ever has.

Yeah, that's quite an exaggeration.

Mantle does, and DX12 will, reduce CPU load during GPU rendering. Neither does very much for raw GPU performance. A little bit, but not much.

We are really talking about CPU load here.

You might notice a huge difference in very specific tasks if you are on a system with a very low-end (or low-power) CPU, but for a typical [H] gaming rig, that difference will be small enough not to be noticeable.

More and more systems are mobile (laptops, netbooks, x86 tablets) and as such have very low-clocked, low-power CPUs. That is the focus of Mantle/DX12.
 
Because pretty much any monitor made in the last 8 years has DP on it.

And if it doesn't, they make adapters. Passive adapters are like 10 bucks and can handle everything up to 1920x1200 just fine.

Not good enough for an HDMI 2.0 4K TV. DP to HDMI 2.0 adapters aren't even close to being released, and when they are they'll probably cost ~$100. At that price I might as well just go for a card with HDMI 2.0 support.

Without HDMI 2.0 support AMD is just cutting out a big section of their market. Now that HDMI 2.0/HDCP 2.2 4K TVs are coming out with 4:4:4 support, it makes little sense to drop HDMI 2.0.
 
But, ugh, I don't have a DisplayPort cable.
I still use DVI. Crime of the century, lock me up boys.

My ZR30W is stuck with DVI and I sure don't mind.

Mine was a 2500K OCed to 4.5, which is where the DC sits, so clocks weren't the improvement.

Basically desktop responsiveness, loading times and general game performance is where I saw the improvements. I tend to play a lot of RTSs and ARPGs, so increased IPC will certainly have an impact.

Moving the platform from P68 to Z97 I'm sure also contributed.

You have nearly the opposite experience of what I got in going from an i7 950 OCed at ~3.8 to a 3930K at 4.6. I saw just about zero improvement in desktop responsiveness, though that could have been from me using SSDs in RAID 0 with 16GB of RAM (in dual channel, blech). For games there was a modest improvement in Civ 5 late-game end-of-turn wait times, but that's about it.

The big changes were from OC'ed 5970/5870 trifire down to a GTX 580 and now to the 7970 water-cooled trifire I'm still using. Especially since I don't generally play FPS games, it's been smooth sailing at 60 frames at 1600p.
 
I was on a spinner at the time.

Didn't upgrade to an SSD till after the DC build.
 
Zarathustra[H];1041519394 said:
It depends.

A lot of the increase in demand on GPUs comes from increasing resolution. That increase in demand on the GPU is exponential. The increase in demand on the underlying CPU typically isn't.


For GPU manufacturers, doesn't it matter if their GPUs will be bottlenecked by only incremental increases in CPU performance?
 
The actual work being asked of the system is the largest contributing factor to whether or not a bottleneck exists. Theoretically an i7-4790K can bottleneck its own IGP, and a GTX Titan X could be the bottleneck even if paired with the slowest desktop CPUs. In practice, for real-world gaming, you won't run into those extreme situations.

The reason is that not all work requires equal amounts of CPU time and GPU time. For example, the resolution increases mentioned above place a very lopsided demand on the GPU relative to the CPU, so any increase in GPU performance has a large impact on end performance even if CPU performance remains constant.
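A back-of-the-envelope way to see why (example numbers only, not benchmarks): per frame, whichever side takes longer sets the frame rate, so speeding up the side that isn't the bottleneck barely moves the result.

[code]
# Illustrative only: the slower of CPU work and GPU work per frame sets the frame rate.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

# GPU-heavy scenario (e.g. high resolution): example per-frame times in milliseconds
cpu_ms, gpu_ms = 5.0, 20.0
print(fps(cpu_ms, gpu_ms))        # 50 fps, GPU-bound
print(fps(cpu_ms / 2, gpu_ms))    # still 50 fps: a 2x faster CPU changes nothing here
print(fps(cpu_ms, gpu_ms / 2))    # 100 fps: a 2x faster GPU doubles the frame rate
[/code]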
 
At this point I feel like my 2600K at 4.5 GHz will be run until failure... there's so little incentive to change it. It just hums along and handles everything thrown at it without breaking a sweat, and has since what, 2011?
 
Ah, my bad, I read that as getting rid of DVI, not HDMI. Honestly though, I can't even say I have ever hooked a computer monitor up using HDMI. But then again I don't try to use a TV as a computer monitor :p
 
For GPU manufacturers, doesn't it matter if their GPUs will be bottlenecked by only incremental increases in CPU performance?

No, because it won't likely be bottlenecked on the CPU end.

Most of the increase in GPU load comes from increasing resolution, which happens over time.

Increase in resolution has absolutely zero impact on CPU load in most engines though.

Going from 1366x768 to 1920x1080 increases pixel count by 98%

Going from 1920x1080 to 2560x1440 increases it by an additional 78%

The next step up to 4k increases pixel count by another 125%

In all of these cases though, CPU load in most games should be roughly the same.
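Those percentages check out; here's a quick sketch of the arithmetic (only the resolutions themselves come from the post above):

[code]
# Pixel-count jumps between common resolutions, as quoted above
resolutions = [(1366, 768), (1920, 1080), (2560, 1440), (3840, 2160)]

for (w1, h1), (w2, h2) in zip(resolutions, resolutions[1:]):
    increase = (w2 * h2) / (w1 * h1) - 1
    print(f"{w1}x{h1} -> {w2}x{h2}: +{increase:.0%} pixels")
# 1366x768 -> 1920x1080: +98%
# 1920x1080 -> 2560x1440: +78%
# 2560x1440 -> 3840x2160: +125%
[/code]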

Resolution keeps going up, but game engine CPU load doesn't rise as fast, and with newer titles and the multithreading and overhead improvements in the likes of Mantle and DX12 this is even less of an issue.

I don't think we need to worry about top-end GPUs outpacing top-end CPUs, to the point where top-end CPUs become the bottleneck, any time soon.

There will always be exceptions with poorly programmed titles (or older titles that tended to pin one core, and leave the others untouched) but here the problem is the software, not the hardware.

Red Orchestra 2 was like this at launch. The realistic in game bullet physics on a 64 player server coupled with the rather inefficient code they were using on launch made it impossible to get smooth 60+fps frame rates on any system with an AMD CPU.

This was the main reason I got fed up at the Bulldozer launch. (My OC'ed 1090T wasn't enough, and Bulldozer's per-core performance looked to be a downgrade based on early reviews, so I got a 3930K.)

Since then they have done a lot of code optimization, and CPU load has dropped significantly.

Even so, RO2 was the perfect storm: a title with unusually high physics requirements, poor PhysX offloading, unoptimized code at launch, and stuck on DX9 without even the DX11 multithreading to help it (and yes, that actually made a difference in RO2 once drivers started supporting it).

Outside of extreme cases like this, I doubt CPU bottlenecking will be an issue that most people with even semi-recent CPUs will have to worry about for a long, long time.
 
Ah, my bad, I read that as getting rid of DVI, not HDMI. Honestly though, I can't even say I have ever hooked a computer monitor up using HDMI. But then again I don't try to use a TV as a computer monitor :p

Meh. DVI and HDMI are the same electrically. A simple adapter (or a cable with DVI on one end and HDMI on the other) is all it takes. So not a big issue.

I would be less happy if they got rid of DVI/HDMI altogether, as the DP adapters are not as good, and cost more, in my experience.
 
HDMI is important for compatibility with devices such as Receiver/Amps.

Does DVI carry sound?
 
HDMI is important for compatibility with devices such as Receiver/Amps.

Does DVI carry sound?

On most newer GPUs, yes, a DVI-to-HDMI adapter will transmit sound. Not sure if it's an official DVI standard or just something GPU manufacturers do.
 
I've been waiting for this card for so long I had a dream that it came out and smashed the Titan X by like 40%, and it was half the price, and there was a huge riot with Titan X owners going krazy. Lol
 
Feels like a fresh take on new tech with the HBM.
A card to frame on the wall after a few years of use.
8 weeks to go
 
I've been waiting for this card for so long I had a dream that it came out and smashed the Titan X by like 40%, and it was half the price, and there was a huge riot with Titan X owners going krazy. Lol

You need to get out more buddy... :eek: :D
 
My AMD/AMD (CPU/GPU) rig is begging for a new video card. It's a 1090T/HD 6870 and the other household rigs keep laughing at it.

The 1090T is doing fine for the gaming it does. I must resist the R9 290 impulse. (Currently using a single 1080p screen. My plan is to put a 1440p panel on this one (after the GPU upgrade), then probably swap the CPU to an i7 next spring.)

Needing some 390X/390/380X/380 goodness...

(And hoping for a better AMD CPU to magically appear before next spring...)
 