Discussion in 'nVidia Flavor' started by Gatecrasher3000, Feb 12, 2018.
Some of us muthafuckas today still need that DVI port!
I would buy another FE no doubt tho, as thermals will be even better than Pascal since it's a new lithography. I run a really aggressive fan curve and hit 70C at full load, and my OC can go up to 1987, but mostly it's 1962. Since I game with a headset anyway, I won't hear anything. I'm not hitting thermal limits, just silicon limits pretty much. If the 1180 is the same increase as 980 Ti to 1080 Ti I will buy it, but if not I'll wait for the Titanium.
At least nVidia, despite their prices, pushes as hard as they can and really adds a respectable amount of performance, to the degree that even people on the matching tier of last-gen cards will see a huge improvement.
What makes you think miners won't simply continue to add capacity? As long as the profitability and ROI is there, they will continue to buy and add cards. When you're essentially printing money, what motivation is there not to do more?
Yep. Even if they reach the power capacity of your home they will just start upgrading their slower cards.
LOL, 50-60% gain? LOL, that's not happening.
The 2080 beating the 1080 by 50-60% is very possible.
Why not? It pretty much did over the last two generations: 780 > 980 > 1080.
Funny, a Titan V can't even beat a 1080 Ti by much more than 15%, and usually less than that. I don't see a 2080 doing much better than 20% faster than a 1080. Pascal was a home run from an engineering standpoint, so expecting another one is just not happening.
It beats it in machine learning/compute, which it's designed for. The Titan V isn't a gaming card; they priced it like the Titan Z so gamers don't buy it. They don't list it on GeForce.com like the other Titans for a reason.
It's happened every architecture change in the past 8 years, I don't see why it would stop now. GP104 vs GA104 we will see a minimum of 50% improvement.
It's not a gaming card to begin with and on top of that its a reduced GV100 with 3/4th the memory speed. And I see you already forgot Maxwell for example. AMD is 3 generations behind in April on the gaming front, just as they are now in the HPC segment.
Why are you talking about AMD in an Nvidia thread? I didn't mention them. Gaming and compute take nearly the same paths; the only differences are the removal of the tensor cores, and memory speed will only make a tiny difference. I expect many to be disappointed, especially if they think a 50% gain is coming; it will be a minor update that I think many will pass on. You sound like Razor did before the Titan V came out and we saw numbers: he expected a large gain and it was not there. Expecting the gaming version to be miles ahead of the compute version is just not likely at all. We shall see how it turns out around July or so.
To be fair, Kepler to Maxwell was only 30-40%, but we were at the end of the 28nm life cycle at that point.
The Titan V isn't even a GeForce card. Also, there is a lot more than tensor cores; FP64, for example. At the same time, memory bandwidth will see a massive increase due to GDDR6. Not to mention the clocks, due to it being a gaming card.
Gaming and compute cards are very different in their proportions.
Kepler -> Maxwell -> Pascal and now this. History will continue. Funny how it works out when you can fund the R&D.
And now the cards are out in July? Oh, the bitterness you hold.
Memory bandwidth also didn't change. It actually went backwards for the non-Ti.
Not a great time to build or throw something together... but it is a pretty rad time with awesome hardware, and that's likely going to continue.
FP64 is in Pascal as well. GDDR6 will help some, but that is assuming a memory bottleneck is an issue. I have seen higher-clocked Titan V cards (granted, not a gaming card) and it didn't help much. Rumor is an August/Q3 launch; I am actually being generous with my thoughts of a July launch. Bitterness is all you, and always has been.
I'm looking to upgrade from a 980 Ti but will hold off if the upgrade isn't big enough. Last go-around the 1070 was comparable to the 980 Ti, with the 1080 being 10-15% faster. I would need the 2080 (or 1180) to be about the same upgrade from a 1080 Ti to be worthwhile for me. If not, then I'll wait for the next Ti, or maybe pick up a used 1080 Ti if prices come down.
Yep. I remember kids crying in corners and locking themselves in bathrooms, whimpering: "Thousand dollar 1080's!" Didn't happen, Nvidia isn't that short sighted. Running performance circles around AMD while also keeping the price low enough is exactly where they like to be. They're that confident.
You are grasping at straws. Now you're even comparing 6000-7500 GFLOPS of FP64 to ~350 GFLOPS.
So now July is generous of you. Please tell what you really expect then.
You're the one that posted it like Pascal did not have it; I corrected you. I have no doubt they will make changes for the consumer card, but it just won't help as much as you think it will.
The Titan V is ROP limited; we already discussed this in the Volta thread. By cutting CUDA cores and one bus bank (the ROPs are attached to the memory banks), and with fairly low clocks to keep that humongous die and all the extra non-gaming silicon within a certain TDP, it actually has less pixel fillrate than a 1080 Ti! Both have close to the same ROP counts, 96 vs 88, but the Titan V runs at much lower MHz, so effectively less fillrate.
It's 20% faster than a 1080 Ti while being ROP limited! That is why in pure compute tests it's 50-100% faster, depending on the application.
We have seen many times that nV's GPUs scale very well with increased core counts, when the other parts scale proportionately too. This time they couldn't do that with the Titan V; it has so much more shader potential but no extra fillrate (less, as I explained before).
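The fillrate comparison above can be sketched with back-of-the-envelope numbers. The ROP counts are the ones quoted in this thread; the sustained clocks are approximate, illustrative figures rather than official specs:

```python
# Rough pixel fillrate estimate: ROPs * clock.
def pixel_fillrate_gpix(rops, clock_mhz):
    """Theoretical pixel fillrate in Gpixels/s."""
    return rops * clock_mhz / 1000.0

# Approximate figures: Titan V has 96 ROPs at roughly ~1455 MHz sustained,
# the 1080 Ti has 88 ROPs but holds roughly ~1600 MHz in games (assumed).
titan_v = pixel_fillrate_gpix(96, 1455)
gtx_1080_ti = pixel_fillrate_gpix(88, 1600)

print(f"Titan V:  {titan_v:.1f} Gpix/s")   # ~139.7 Gpix/s
print(f"1080 Ti:  {gtx_1080_ti:.1f} Gpix/s")  # ~140.8 Gpix/s
```

Under those assumed clocks, the card with more ROPs still ends up with slightly less theoretical fillrate, which is the point being made about low Titan V clocks.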
Also look at the arrangement of CUDA cores in the SMs: nV compute cards have half the number of CUDA cores per SM vs the gaming cards. Essentially it's a different cache and core layout; the ASIC is different! This will make a big difference in driver optimizations for games. I expect to see the same in Volta vs whatever the gaming GPUs will be called.
This also shows why nV's current architectures are more scalable (which is what AMD is striving for with Navi): nV's architectures scale across many different needs.
I am not convinced it's as ROP limited as you claim tho; the clock speed is no doubt hurting it. As for the 20%, I have also seen the 1080 Ti tie or beat the Titan V as well, but both are outliers, and one would expect it to win in pure compute. I just don't see this 50% increase coming; I think 20% is far more likely, as I just don't think they will get the clocks they need for a large improvement. On the plus side, if I am wrong then we end up with a better card we can buy. Shame it likely won't be cheaper tho.
I'm 100% positive it is; we can see what resolution does with the Titan V vs the 1080 Ti, lol. What does resolution affect? Pixel fillrate. That is the only metric that has an equalizing balance between the 1080 Ti and Titan V. And the titles that show this well are more pixel-fillrate limited than limited by any other part of the GPU.
Look at these titles.
Look at FPS in relation to resolution. You can see the Titan V is getting bottlenecked.
You can't say it's not, because as the resolution gets higher there is an equalizing effect. Now, is it a CPU bottleneck? I don't think so, because that would equalize the numbers for all of the graphics cards. There is only one place left to look: the GPU. And what on the GPU is bottlenecking? Simple.
To be more obvious than that
A ~13% overclock is giving the Titan V a 19% performance boost. Yeah, it's bottlenecked, and heavily bottlenecked in these titles by its fillrate. The only time we will ever see a performance increase larger than the overclock amount is when a bottleneck is being alleviated.
This happens in more than one game to boot.
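The superlinear-scaling argument above can be stated as a quick check, using the 13% clock gain and 19% performance gain quoted in the post:

```python
# If performance scales MORE than the overclock, something other than
# raw shader throughput was limiting the card at stock clocks.
def scaling_efficiency(clock_gain, perf_gain):
    """Ratio of performance gain to clock gain; a value > 1.0 hints
    at a bottleneck (e.g. fillrate) being relieved by higher clocks."""
    return perf_gain / clock_gain

eff = scaling_efficiency(0.13, 0.19)
print(f"scaling efficiency: {eff:.2f}")  # ~1.46, well above 1.0
```

A ratio near 1.0 would mean performance simply tracks clocks; anything clearly above it is the "bottleneck being alleviated" case described above.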
Look at the .1% and 1% lows tho; they're horrible on the Titan V in Doom. It also happens in Grand Theft Auto and a few other titles, and that just does not scream a ROP issue to me. I mean, heck, a Vega 64 running at stock has better frame times than a Titan V at stock. I have seen it do that in other benchmarks, but not all the time; this is why I tend to discount the ROPs as an issue. Then there is stuff like this.
You even have history against you. This is what R&D gives. Memory bandwidth alone will increase something like 50-60% due to GDDR6. CUDA cores up 50%.
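That bandwidth claim is easy to sanity-check. The per-pin rates below are the standard GDDR5X and GDDR6 figures of the time (10 Gbps vs 14-16 Gbps); the 256-bit bus is an assumption matching a GTX 1080-class card:

```python
def bandwidth_gb_s(pin_speed_gbps, bus_width_bits):
    """Memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return pin_speed_gbps * bus_width_bits / 8

gddr5x = bandwidth_gb_s(10, 256)    # 320 GB/s (GTX 1080 class)
gddr6_lo = bandwidth_gb_s(14, 256)  # 448 GB/s -> +40%
gddr6_hi = bandwidth_gb_s(16, 256)  # 512 GB/s -> +60%

print(gddr5x, gddr6_lo, gddr6_hi)
```

So on the same bus width, GDDR6 at 14-16 Gbps lands in the 40-60% uplift range, roughly in line with the 50-60% figure above.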
The Titan V is still not a gaming card. It may not even have gaming-optimized drivers in any way, because that's not the target audience. It's the poor AI researcher's card. The GeForce Titan will be 102 based. And the card is certainly limited in fillrate. Your own example isn't making your argument better, as compute increases vs fillrate.
On paper the ROPs on the Titan V are slower than the Titan Xp's.
I don't know how much truth is in this article, but rumor has it these cards will be LOCKED for MINING, basically shutting the crypto miners out.
It's pretty much impossible to lock out miners, since mining uses a lot of the same compute. And then they can just make a new coin, etc.
I don't care about the 1% and .1% lows in this context, and you shouldn't either; they are driver and software driven. We see that in every game they're different from vendor to vendor and across hardware generations. We also see that they can be fixed with driver tweaks. These have nothing to do with fillrates. On-screen fillrate remains fairly flat throughout the entire game; ya only have so many pixels to cover, as long as pixel overdraw isn't happening too much!
What you might actually be seeing here is just the power delivery being increased enough to normalize the performance. The stock numbers are clearly throttling and limiting performance in stock scenarios, hence the 'artificial' bottleneck look in the graph.
Now that it seems (relatively substantiated) that production won't be starting until June, with the announcement some time in June and launch in July (or August) for Ampere, I've decided to hold onto one of my 1080 Tis.
The bright side from the longer wait is the crypto market could have somewhat stabilized by then thus bringing prices down a bit from bonkers levels.
We must have different definitions of “substantiated”. All I’ve seen are rumors from shitty websites with long histories of talking out their asses for page clicks.
The lack of any real information, or even crappy Chinese leaks, should be a clue that it won't be soon. July or August would be the most likely launch months if things start to leak around May. Also, Nvidia has made it clear they are in no rush.
The reviewers would be able to see the throttling if that was the case, they made sure the card wasn't throttling to begin with.
Even gamers nexus didn't have much of a clock problem at stock.
And added to this, if then the bottleneck is still there and that is why we see the % difference of overclocked vs regular clocks. Bottleneck is being relieved.
In any case, Volta is actually boosting above stock boost, then going back to its rated boost.
This is from PCper.
It easily hits its rated boosts.
So when overclocking, and raising its thermal limits too, it's relieving the bottleneck that is present, which is the fillrate problem.
Over at AnandTech they checked clock speeds in gaming too.
Every single one of the games goes well above its rated boost clocks.
"We're sneaking out a major launch without much to-do because a) there's no competition and b) our current GPU stock is selling 30% over MSRP" -- said no nVidia rep ever.
Looks like Vega 2. It will be released end of 2017... no, CES... wait, it will be GTC... ahem, April, but not April Fools... wait, it will be July. Bottom line: Nvidia has not said anything yet on a release date. I would expect something to be said around GTC. Plus, GDDR6 is not being used in phones, so Nvidia should have virtually exclusive use of this tech, if Nvidia isn't still using GDDR5(X).
GDDR is also used in network devices etc. There are even network optimized versions of GDDR5 and GDDR6 called GDDR5N and GDDR6N.
GDDR6 is in full production and that's a big hint by itself. Because there is only one high volume segment currently for it.
I haven't looked at video cards in months but even I remember the Titan V is a piss poor indicator.
What I would do, if I had the time, is find a test with it watercooled and the power limit hard-modded away. Then figure out how much of the die is FP64/tensor cores. Then, assuming ROP impact isn't huge, figure out performance per die mm^2 vs Pascal.
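That back-of-the-envelope exercise might look like this. The die sizes are the published GV100 (815 mm^2) and GP102 (471 mm^2) figures, but the fraction of GV100 spent on FP64/tensor silicon and the relative performance number are pure guesses for illustration:

```python
def perf_per_gaming_mm2(rel_perf, die_mm2, non_gaming_fraction=0.0):
    """Relative performance per mm^2 of gaming-relevant silicon,
    after discounting die area assumed spent on FP64/tensor cores."""
    gaming_area = die_mm2 * (1.0 - non_gaming_fraction)
    return rel_perf / gaming_area

# Assumptions: Titan V ~20% faster than a 1080 Ti in games, and ~25% of
# the GV100 die is FP64/tensor silicon irrelevant to gaming. Both guesses.
volta = perf_per_gaming_mm2(1.20, 815, non_gaming_fraction=0.25)
pascal = perf_per_gaming_mm2(1.00, 471)

print(f"Volta vs Pascal perf per gaming mm^2: {volta / pascal:.2f}")
```

Under those particular assumptions the ratio comes out slightly below 1.0, i.e. no per-area gain over Pascal, but the result swings entirely on the guessed non-gaming fraction, which is exactly why the poster wants real die-area data first.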
The Titan V using a dual slot blower and people quoting tests from that config...