Could the 980ti/Titan X be the fastest GPU for a year?

A year or more until the next GPU king?


  • Total voters
    79

PRIME1

2[H]4U
Joined
Feb 4, 2004
Messages
3,942
The Titan X launched in March.

Fury X was unable to knock its crown off.

So does anyone think there will be a new leader by March 2016?

Rumors are swirling about 16nm being ready before then and Maxwell will be 2 years old at that point. This will be one of those polls we can look back on in 12 months and see how we did. Because it does not look like we are really going to have much to discuss for a looooong time.
 
Probably not. Fury X was the big chip until 16nm. Unless we're talking about manufacturer-overclocked variants, the likelihood of something new coming out is doubtful. A very fast 16nm ramp in Q1 2016 is about the earliest I'd expect anything to arrive and solidly dethrone the Fury X/Titan X.

I think 16nm is ramping up in Q3 of this year, so that's still very much possible. It depends on how long both sides want to keep their current products on the shelf, to be honest.
 
I took GPU to mean a single GPU, not a dual-GPU card.

I think it'll be at least a year. I expect the node shrink to start with small dies and then move up in die size. I could be wrong and we get a race between nVidia and AMD where they throw good practices to the wind and go ape shit. I know at least some of us think 2017 for "big Pascal". Personally, I'd expect the end of 2016 for something to surpass the Titan X.
 
There should be an x80 release between November 2015 and January 2016, because Nvidia reliably releases x80 cards on a 15-month cycle, give or take a month (the sole exception being the 480, which was 6 months late).

It won't be Pascal, unless everybody and their mother is wrong about how soon Pascal is coming, so it should be similar to what happened with the 780, which used the GK110 from the original Titan with higher clock speeds. I would expect the next x80 to use the Titan X's GM200 with higher clock speeds, just in time for the holiday buying season.
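For what it's worth, the arithmetic on that cadence is easy to check. A minimal sketch (the GTX 980's September 2014 launch is the anchor; the 15-month figure is the claim above):

```python
from datetime import date

def add_months(d, months):
    """Shift a date forward by a whole number of months."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# GTX 980 launched in September 2014; add the claimed 15-month cycle.
projected = add_months(date(2014, 9, 18), 15)
print(projected)  # 2015-12-18 -- right in the Nov 2015 to Jan 2016 window
```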
 
...

It won't be Pascal, unless everybody and their mother is wrong about how soon Pascal is coming, so it should be similar to what happened with the 780, which used the GK110 from the original Titan with higher clock speeds. I would expect the next x80 to use the Titan X's GM200 with higher clock speeds, just in time for the holiday buying season.

IDK about that. NVidia is under no pressure to release a new card. If I were them, I would ride the wave of the 980Ti/Titan X until Pascal. Why spend money where it's not needed? If Fury X had performed as well as AMD had claimed it was going to, I think NVidia might have released a "Titan X Black" or a "GTX 985".


I think if anything, we are likely to see a "GTX 990". 2 full GM200 cores on 1 card.
 
IDK about that. NVidia is under no pressure to release a new card. If I were them, I would ride the wave of the 980Ti/Titan X until Pascal. Why spend money where it's not needed? If Fury X had performed as well as AMD had claimed it was going to, I think NVidia might have released a "Titan X Black" or a "GTX 985".


I think if anything, we are likely to see a "GTX 990". 2 full GM200 cores on 1 card.

Exactly my thoughts. The only reason I wanted AMD to pull out a win with Fury was so it might put some pressure on Nvidia. The way things are, Nvidia can sit back and kick their feet up for the foreseeable future.
 
Why have video cards been stuck on such a terribly dated 28 micron process when CPUs have been smaller for so long? This seems odd to me.
 
Honestly, after DX12 is out, the dual-GPU solution on a single PCB is going to be standard. With DX12's better GPU utilization, and better hardware utilization as a whole, it should be widely accepted. With AAA titles almost always supporting SLI/CF, why would you not jump on a dual solution?

I also think drivers on both sides will improve greatly with the release of DX12.
 
Why have video cards been stuck on such a terribly dated 28 micron process when CPUs have been smaller for so long? This seems odd to me.

Because it takes billions of dollars to switch to a new process, and it actually takes time? Maybe those reasons?
 
Two problems with your statement. Process nodes haven't been measured in microns for at least a decade; they're measured in nanometers, abbreviated "nm". Intel can make 14nm dies because their CPUs are tiny in terms of die area compared to a high-end GPU die, and the bigger the chip, the harder it is to get good yields at the fabrication facility. Also, Intel has more R&D budget for semiconductor crap than anyone else...
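To put rough numbers on that die-size point, here's a minimal sketch using the textbook Poisson yield model (the defect density and the CPU die area are illustrative guesses; 601 mm² is GM200's published die size):

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Poisson yield model: probability a die has zero fatal defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.001  # illustrative defect density (defects/mm^2), not a real fab number

cpu_yield = poisson_yield(150, D0)  # ~150 mm^2: ballpark quad-core CPU die
gpu_yield = poisson_yield(601, D0)  # 601 mm^2: the Titan X's GM200

print(f"CPU-sized die:   {cpu_yield:.1%}")  # ~86.1%
print(f"GM200-sized die: {gpu_yield:.1%}")  # ~54.8%
```

Same wafer, same defect rate, and roughly a third of the big dies are lost just to area, before any binning for clocks.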
 
Thank you for the correction, Wade.

I saw some articles talking about 16nm video cards next year. Are they jumping straight from 28 to 16?
 
Why have video cards been stuck on such a terribly dated 28 micron process when CPUs have been smaller for so long? This seems odd to me.

Because it takes billions of dollars to switch to a new process, and it actually takes time? Maybe those reasons?

Because TSMC is not the greatest at keeping up with fab transitions. Nvidia knew this and even called them out years ago here.

To be honest, don't expect another shrink for years after 14/16nm. Even Intel had problems getting their 14nm up and running and may be delaying their 10nm processors, and no one touches Intel in terms of R&D for foundries and fab processes. Samsung is catching up, but they still have a ways to go before they are even close to Intel's progress. TSMC is nothing compared to those two, though.
 
Because TSMC is not the greatest at keeping up with fab transitions. Nvidia knew this and even called them out years ago here.

To be honest, don't expect another shrink for years after 14/16nm. Even Intel had problems getting their 14nm up and running and may be delaying their 10nm processors, and no one touches Intel in terms of R&D for foundries and fab processes. Samsung is catching up, but they still have a ways to go before they are even close to Intel's progress. TSMC is nothing compared to those two, though.

That's basically what happens when a company designs its chips around a fab process built by another company.
 
The title says the fastest GPU; it doesn't matter if it's dual or not.
They should change it to the fastest single GPU.
 
The title says the fastest GPU; it doesn't matter if it's dual or not.
They should change it to the fastest single GPU.

Technically it wouldn't matter. When you see the OP, you see this poll for what it is: flame bait. Even option 3 speaks to its unscientific basis. Lol
 
All I see in your link is that it struggles to keep up with dual 980s (and sometimes even a single Titan X) while drawing so much more power it's not even funny. And, well, there are no frametime/frame-pacing metrics or anything, so they're perfectly pointless benchmarks as far as multi-GPU is concerned.

All I see is an old card beating newer cards. I still have my 970, but the 295X2 is still the fastest card in the world.
 
There are multiple ways of measuring performance, and I voted no in the poll because it wasn't specific enough.

Had it been worded more precisely (game performance, better performance across a large selection), I probably still would have voted no, because who's to say that Nvidia or AMD won't release some ridiculously overclocked GPU by December 31, power consumption be damned.
 
The answer is, obviously, yes.


Reviews have clearly shown that Fury X has failed to beat the 980ti and Titan X.


If VRAM stacking under DX12 pans out, then it will be interesting to see re-reviews of all the flagship and halo products.
Fury X Xfire owners: "8GB stacked, yo!"
980 SLI owners: "Same for me!"
970 SLI owners: "Same! well, kinda..."
980ti SLI owners: "*yawn* Uh, great. 12GB here."
290X/390X 8GB Xfire owners: "16GB. Losers."
Titan X SLI owners: "Please, bitches..."
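A quick tally of those configurations, assuming the stacking rumor pans out (today's SLI/CrossFire mirrors the same data into each card's VRAM, so the effective pool is one card's worth):

```python
# (number of cards, VRAM per card in GB) for each setup above
configs = {
    "Fury X Xfire":        (2, 4.0),
    "980 SLI":             (2, 4.0),
    "970 SLI":             (2, 3.5),  # counting only each card's fast 3.5 GB
    "980ti SLI":           (2, 6.0),
    "290X/390X 8GB Xfire": (2, 8.0),
    "Titan X SLI":         (2, 12.0),
}

for name, (cards, vram) in configs.items():
    # mirrored = the status quo; stacked = the DX12 hope
    print(f"{name}: {vram:g} GB mirrored -> {cards * vram:g} GB stacked")
```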
 
Well, hopefully if we don't see anything new we at least see a price war, because these $600+ high-end cards are kind of ridiculous to me. The only thing that makes them palatable is that gaming just hasn't really progressed much and CPUs are stagnant, meaning we don't have much else to spend our money on.
 