More 1660 TI Pictures Emerge as Launch Nears

AlphaAtlas

As the 1660 Ti's launch nears, more pictures of the unreleased GPU are leaking onto the internet. One Reddit user posted what appears to be a Galax GeForce 1660 Ti, while Videocardz uploaded a picture of a Palit GTX 1660 Ti StormX. Back in January, an anonymous source told us that the card would launch on the 15th for around $279, but noted at the time that the information "is likely in flux now and changing day to day." More recent rumors (which we haven't confirmed yet) suggest the card will launch on the 22nd.

Palit is preparing two models of the StormX series. Both cards are equipped with a TU116 GPU and 1536 CUDA cores. The GTX models from Palit feature 6GB of GDDR6 memory. The StormX OC variant features a clock speed of 1815 MHz, while the non-OC variant is set to the default 1770 MHz.
 
More excess inventory, more problems... Oh sorry, I meant products.
 
Did it ever get determined what the performance difference between TU and GP cores are, apart from Turing/RTX (that presumably won't be present on 1660)?
 
WCCFTech was reporting this would go for $349 which made no sense given the 2060 is a thing. $279 sounds about right though.
 
If it really only trades with a 1070, it's not going to be very impressive and there will be another batch of unsold cards; I saw a 1070 on Newegg today at $299, so either they have to drop the price into the $230-240 range, or beat the 1070 by 5-10% to be a viable option.
 
Did it ever get determined what the performance difference between TU and GP cores are, apart from Turing/RTX (that presumably won't be present on 1660)?

It can be roughly estimated using the ratios between the RTX 2060 and the Pascal cards. The RTX 2060 has the same number of cores as the GTX 1070 yet performs close to a GTX 1080. If you apply the same ratio, the 1660 Ti should perform very close to a GTX 1070 despite having a core count closer to the GTX 1060's.

I actually think it will perform better than the GTX 1070 in certain DX12 titles due to the greater bandwidth, as well as in Vulkan given how well Turing is doing there overall.

It should be a new performance/$ champ.
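The ratio estimate above can be sketched as quick arithmetic. The core counts are the published specs; the relative-performance figures (GTX 1070 = 1.00, RTX 2060 landing near a GTX 1080 at about 1.25x) are rough assumptions for illustration, not measured benchmarks:

```python
# Back-of-envelope Turing-vs-Pascal per-core uplift estimate.
# Core counts are published specs; perf numbers are assumed for the sketch.
cores = {"GTX 1070": 1920, "RTX 2060": 1920, "GTX 1660 Ti": 1536}
perf = {"GTX 1070": 1.00, "RTX 2060": 1.25}  # assumed: 2060 lands near a GTX 1080

# Per-core uplift of Turing over Pascal at equal core count (2060 vs 1070):
uplift = (perf["RTX 2060"] / cores["RTX 2060"]) / (perf["GTX 1070"] / cores["GTX 1070"])

# Scale the 1660 Ti's 1536 Turing cores by the same per-core throughput:
est = cores["GTX 1660 Ti"] * uplift * (perf["GTX 1070"] / cores["GTX 1070"])
print(f"Turing per-core uplift: {uplift:.2f}x")       # 1.25x
print(f"Estimated 1660 Ti vs GTX 1070: {est:.2f}")    # 1.00, i.e. roughly 1070-level
```

Under those assumed numbers the 1536-core 1660 Ti works out to almost exactly GTX 1070 performance, which matches the quoted reasoning.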
 
"1536 CUDA cores."

It never ceases to amaze me how my 6-year-old card has almost double the number of cores of a 2019 model. Regardless of where the 1660 ends up being positioned, how much things have stagnated in that amount of time blows my mind.
 
"1536 CUDA cores."

It never ceases to amaze me how my 6-year-old card has almost double the number of cores of a 2019 model. Regardless of where the 1660 ends up being positioned, how much things have stagnated in that amount of time blows my mind.

Or it shows just how inefficient your card is, if a card with half the number of CUDA cores outperforms it. That's not stagnation, lol.
 
It's one thing for AMD to give up on the high end, but it is another thing all together for them to let Nvidia take over the midrange.

The RX590 looks like a joke now compared to the 1660ti. It will lose in efficiency AND performance/$ the way things are looking.

Why AMD is not releasing something like a cut down Radeon VII with half the bandwidth and HBM 2 using 48 CUs is beyond me.

Even if it still lost to the GTX1660ti / RTX 2060 in performance/$, it would at least have the compute and workstation use with potential resale that would justify the extra cost.
 
"1536 CUDA cores."

It never ceases to amaze me how my 6-year-old card has almost double the number of cores of a 2019 model. Regardless of where the 1660 ends up being positioned, how much things have stagnated in that amount of time blows my mind.

Shader core counts have gone up slowly because just adding more cores doesn't scale well; there have been huge clock and IPC improvements as well. Also, this is a low-to-midrange card that is much faster and cheaper than your 290X(?) (shot in the dark) while consuming half the power, so it doesn't say much about leading-edge progression.
 
While I doubt AMD can release a cut down Radeon VII, they absolutely need a new card to compete in mid range again.

RX 480 is so damn old now.

1660 ti is going to sell like hotcakes.
 
Why AMD is not releasing something like a cut down Radeon VII with half the bandwidth and HBM 2 using 48 CUs is beyond me.

Even if it still lost to the GTX1660ti / RTX 2060 in performance/$, it would at least have the compute and workstation use with potential resale that would justify the extra cost.

...because HBM2 and a 7 nm die are expensive.
 
Shader core counts have gone up slowly because just adding more cores doesn't scale well; there have been huge clock and IPC improvements as well. Also, this is a low-to-midrange card that is much faster and cheaper than your 290X(?) (shot in the dark) while consuming half the power, so it doesn't say much about leading-edge progression.

So those guys suing AMD for bulldozer are going to go nuts here.
 
While I doubt AMD can release a cut down Radeon VII, they absolutely need a new card to compete in mid range again.

RX 480 is so damn old now.

1660 ti is going to sell like hotcakes.

Do you mean low end?

Polaris is low end.

Vega is mid-range (and, in case of Radeon VII, high end)
 
Do you mean low end?

Polaris is low end.

Vega is mid-range (and, in case of Radeon VII, high end)

$200-$250 cards are still midrange, as there are about three tiers under that. This doesn't change just because team green has about two ultra-high-end and two high-end cards. Vega could be considered upper midrange.
 
$200-$250 cards are still midrange, as there are about three tiers under that. This doesn't change just because team green has about two ultra-high-end and two high-end cards. Vega could be considered upper midrange.


People are confused.

When it was said that Navi will be released first for the midrange, that just means below the Radeon VII.
 
this should trade blows with an RX 590 ?

I think the rumor was approximately 15% faster than the 1060. But this being a "Ti," maybe it will match a 1070. Honestly, this whole generation hasn't been too impressive to me. Price/performance is not up to par with previous generations (just my opinion).
 
let me guess, it's faster than Radeon VII

The performance of the VII, V64, or V56 has never been the issue; the issues have been TDP and pricing no cheaper than the Nvidia counterparts for the most part, though some good deals show up from time to time.
Some (like me) picked one up at hilarious pricing, but I feel $440 at launch would have been an acceptable price, and not even a good one, because the hardware really isn't good.

The GTX 1660 Ti will compete with the RX 590.
 
Shader core counts have gone up slowly because just adding more cores doesn't scale well; there have been huge clock and IPC improvements as well. Also, this is a low-to-midrange card that is much faster and cheaper than your 290X(?) (shot in the dark) while consuming half the power, so it doesn't say much about leading-edge progression.
I’m not disagreeing at all, but the 290x is still a 1080p beast today. Probably has more to do with games not pushing the cards as much.



But yeah, Ski, looking purely at the number of cores is pointless. It's apples and oranges across architectures, and while I'll jokingly always say "more = better," you can just look at the Vega architecture to know that more cores doesn't equal more performance. Case in point: Vega 56 and the 1080 Ti both boast 3584 cores. They aren't even in the same league as each other, and whichever 6-year-old card you're referring to is also well beneath what even the Vega 56 can do.
 
I’m not disagreeing at all, but the 290x is still a 1080p beast today. Probably has more to do with games not pushing the cards as much.

Sorry, didn't mean to disparage the 290X; it is (and was) a great card! My first dip into 4K gaming (30p, but it was 2014 after all) was on an OC'd, fast and loud Sapphire reference model. :)

I might get flamed for saying this but I actually appreciate that Xbone/PS4 ports are keeping base requirements reasonable because it keeps good older cards relevant. I disagree with the notion that it's holding back gfx- big titles are still pushing the limits of new hardware, the downward scalability is just broader. imho
 
So those guys suing AMD for bulldozer are going to go nuts here.
Right! Haha, they think it's complicated now.
Wait till they hear about how many different cache/scheduler/ALU/FPU configurations GPU architectures have!
 
Yes yes yes, but is there going to be a non-raytracing equivalent of the 2080ti? Don't give a crap about raytracing at this early stage, but want an upgrade for my 1080....
 
Yes yes yes, but is there going to be a non-raytracing equivalent of the 2080ti? Don't give a crap about raytracing at this early stage, but want an upgrade for my 1080....

nope the odds are definitely against you on that one..
 
Yes yes yes, but is there going to be a non-raytracing equivalent of the 2080ti? Don't give a crap about raytracing at this early stage, but want an upgrade for my 1080....
I highly doubt it, because Nvidia are unlikely to offer the consumer the chance to buy 2080 Ti levels of (non-RT) performance at a lower price than the RTX 2080 Ti, for the simple reason that that reduces the incentive for anyone to, er, actually buy an RTX 2080 Ti. That might change once/if ray tracing takes off, but right now Nvidia aren't going to cannibalise their own sales.

The reason that the 1660Ti makes sense is that it apparently doesn't offer equivalent non-RT performance to the RTX 2060, so if you want 2060 levels of performance (even if you don't care a fig for ray-tracing) then you have to pony up for one. The 1660Ti slots in below the 2060 regardless of ray-tracing, but it wouldn't make sense if the only differentiator between the 2060 and 1660Ti was ray-tracing alone, because given the choice who wants to pay more money for no actual real-world benefit?
 
I highly doubt it, because Nvidia are unlikely to offer the consumer the chance to buy 2080 Ti levels of (non-RT) performance at a lower price than the RTX 2080 Ti, for the simple reason that that reduces the incentive for anyone to, er, actually buy an RTX 2080 Ti. That might change once/if ray tracing takes off, but right now Nvidia aren't going to cannibalise their own sales.

The reason that the 1660Ti makes sense is that it apparently doesn't offer equivalent non-RT performance to the RTX 2060, so if you want 2060 levels of performance (even if you don't care a fig for ray-tracing) then you have to pony up for one. The 1660Ti slots in below the 2060 regardless of ray-tracing, but it wouldn't make sense if the only differentiator between the 2060 and 1660Ti was ray-tracing alone, because given the choice who wants to pay more money for no actual real-world benefit?
Oh I highly doubt it too... But a guy can dream.
 
Is this 1660 Ti also going to be gimped at 6 GB of VRAM, or will it get 8 GB+?
All these cards have a 192-bit interface, so they all get 3 or 6 GB of RAM. Nvidia no longer does what I have on my GTX 660: connect 3x2 RAM chips to 192 bits and 2x1 to 128 bits (8 chips, 2 GB of RAM; they probably used a single 192-bit controller with the ability to work as a 128-bit controller when accessing those two most distant chips).

Nonetheless, using an array with two different speeds can cause problems, which is why Nvidia allowed allocating the whole RAM as a single array only in 32-bit mode; the lack of 32-bit address space forced arrays to be segmented, which made it easy to keep each segment at its own speed. A 64-bit OS allows a single address space, and as a consequence Nvidia was forced to block the creation of an array that would cross between the fast 192-bit access and the slower 128-bit access.

Short version: on a 64-bit OS it's too big a hassle to bother with 8 GB of RAM on a 192-bit interface, and 9 GB of RAM would make a midrange card more expensive and could cut into profits from the high-end card with 8 GB of RAM on a 256-bit interface.
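The 192-bit argument above comes down to simple arithmetic: GDDR memory chips expose a 32-bit interface, so a 192-bit bus has six channels, and with one chip per channel and a uniform chip density the total capacity is locked to a few fixed sizes. A quick sketch (the densities listed are common GDDR values, assumed here for illustration):

```python
# Why a uniform-chip 192-bit bus lands on 3 / 6 / 12 GB.
# GDDR chips use a 32-bit interface; densities below are assumed common values.
BUS_WIDTH = 192
CHIP_WIDTH = 32
channels = BUS_WIDTH // CHIP_WIDTH  # 6 chips, one per 32-bit channel

for density_gbit in (4, 8, 16):  # per-chip density in gigabits
    total_gb = channels * density_gbit / 8  # gigabits -> gigabytes
    print(f"{channels} x {density_gbit} Gb chips -> {total_gb:.0f} GB")
# 6 x 4 Gb -> 3 GB, 6 x 8 Gb -> 6 GB, 6 x 16 Gb -> 12 GB
```

Anything in between (like 8 GB) would require the kind of mixed-speed, asymmetric arrangement the post describes on the GTX 660.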
 