GeForce GTX 1660 Ti Analyzed

AlphaAtlas

The Nvidia GeForce GTX 1660 Ti is out, and one of our favorite Scottish YouTubers has already posted an analysis of the new GPU, along with some notes about the rest of the Turing lineup. Among other things, he points out that the GTX 1660 Ti's 284 mm² die is relatively small compared to the massive RTX 2070, 2080, and 2080 Ti dies. But the 1660 Ti GPU is also significantly larger and faster than the GTX 1060, yet consumes almost the same amount of power under load, indicating that Nvidia has significantly improved Turing's power efficiency compared to Pascal in spite of the seemingly meager process advantage.

Check out the full analysis here.
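To put the efficiency point in rough numbers: TU116 is about 40% larger than the GTX 1060's GP106 (284 mm² vs. a commonly cited ~200 mm²), both cards carry a 120 W TDP, and reviews generally put the 1660 Ti around 30-40% faster. Here's a quick back-of-the-envelope sketch in Python; the ~35% uplift is an assumed round number for illustration, not a measured benchmark:

```python
# Back-of-the-envelope Turing vs. Pascal efficiency comparison.
# Die sizes and TDPs are commonly cited figures; the ~35% performance
# uplift is an assumed round number, not a measured benchmark.

CARDS = {
    # name: (die_area_mm2, tdp_watts, relative_performance)
    "GTX 1060 (GP106)":    (200, 120, 1.00),  # Pascal baseline
    "GTX 1660 Ti (TU116)": (284, 120, 1.35),  # assumed ~35% faster
}

base_area, base_tdp, base_perf = CARDS["GTX 1060 (GP106)"]

for name, (area, tdp, perf) in CARDS.items():
    print(f"{name}: {area} mm^2 ({area / base_area:.2f}x die), "
          f"{tdp} W, {perf / base_perf:.2f}x perf, "
          f"{(perf / tdp) / (base_perf / base_tdp):.2f}x perf/W")
```

With the TDP essentially unchanged, the perf-per-watt gain tracks the raw performance gain, which is exactly the Turing-vs-Pascal point the analysis makes.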
 
Navi is going to have to be so much better than GCN and come out soon... Honestly, I'm not sure why Nvidia didn't wait a generation on RTX. If Turing is ~30% better, all things being equal, they could have released 30% faster cards at the same price points as last gen and made everybody happy.
 
Navi is going to have to be so much better than GCN and come out soon... Honestly, I'm not sure why Nvidia didn't wait a generation on RTX. If Turing is ~30% better, all things being equal, they could have released 30% faster cards at the same price points as last gen and made everybody happy.

This is easy: they want customers to beta test RTX and DLSS on their own dollar. It's brilliant.
 
Navi is going to have to be so much better than GCN and come out soon... Honestly, I'm not sure why Nvidia didn't wait a generation on RTX. If Turing is ~30% better, all things being equal, they could have released 30% faster cards at the same price points as last gen and made everybody happy.
Because they got most of the people they thought would buy RTX at this point, and rather than waiting with empty production lines or building up stock that may not sell, they'd prefer to be making money now.
 
Honestly, I'm not sure why Nvidia didn't wait a generation on RTX. If Turing is ~30% better, all things being equal, they could have released 30% faster cards at the same price points as last gen and made everybody happy.

That would only move the exact same problems to the 3000 series.

No one is forcing you to buy this generation. No matter when you introduce a major new tech like this, it is going to be a pain point, as you are introducing technology that takes up a lot of die space and has no software to take advantage of it (a chicken-and-egg problem).

There is no easy way around this.

The sooner you do it, the sooner you put that transition pain behind you. Now when they get the gen 2 RTX cards out, there will actually be some software to support them.

A 3000 series with 2nd gen RTX is going to be a lot better than a 3000 series with 1st gen RTX, because there will be software support.
 
That would only move the exact same problems to the 3000 series.

No one is forcing you to buy this generation. No matter when you introduce a major new tech like this, it is going to be a pain point, as you are introducing technology that takes up a lot of die space and has no software to take advantage of it (a chicken-and-egg problem).

There is no easy way around this.

The sooner you do it, the sooner you put that transition pain behind you. Now when they get the gen 2 RTX cards out, there will actually be some software to support them.

A 3000 series with 2nd gen RTX is going to be a lot better than a 3000 series with 1st gen RTX, because there will be software support.

Exactly this. When the GeForce 3 came out, it cost much more than the GeForce 2 but performed identically to a GeForce 2 Ultra. Its defining feature set, programmable shaders, didn't deliver usable performance on the GeForce 3 itself, and even the GeForce 4 struggled. But without those cards leading the way, we'd still have DirectX 7 / PlayStation 2 level graphics.

It will take a few years for developers to get a handle on ray tracing tech, and for the hardware to handle it with good performance. But the hardware with the feature set must exist in the first place for any of that to happen.

And unlike in 2001 with the transition from fixed function to programmable shaders, at least the new cards offer a performance boost in existing rasterized titles.
 
Exactly this. When the GeForce 3 came out, it cost much more than the GeForce 2 but performed identically to a GeForce 2 Ultra. Its defining feature set, programmable shaders, didn't deliver usable performance on the GeForce 3 itself, and even the GeForce 4 struggled.
Huh?? The GeForce 4 kicked major butt. And the Ti 4200 was a value leader for a long time, until ATI eventually released the 9700 Pro and 9800 Pro. The GeForce 4's only weakness was that it didn't have the extra DirectX 8.1 shaders that ATI's 8500 did. But the 8500 was not as powerful as a Ti 4200, and was dwarfed by a Ti 4600. And there were only a couple of games with obvious effects missing when using a GeForce 4. Max Payne 2 is one of them.
 
Huh?? The GeForce 4 kicked major butt. And the Ti 4200 was a value leader for a long time, until ATI eventually released the 9700 Pro and 9800 Pro. The GeForce 4's only weakness was that it didn't have the extra DirectX 8.1 shaders that ATI's 8500 did. But the 8500 was not as powerful as a Ti 4200, and was dwarfed by a Ti 4600. And there were only a couple of games with obvious effects missing when using a GeForce 4. Max Payne 2 is one of them.

Thanks for 100% reinforcing my point. The GeForce 3 and Radeon 8500 were contemporaries, and both were no more competent than their DX7 predecessors at DX7 code, and in terms of DX8 code (programmable shaders) they were pretty poor performers, useful as a platform for Carmack to build id Tech 4 but that's about it.
The GeForce 4 was a rock star at DX7 code.
But try playing Doom 3 or Half-Life 2 on a GeForce 3, or a Radeon 8500, or even a GeForce 4, and you're going to have a bad time. It wasn't until the Radeon 9700 and GeForce 6 series that DirectX 8 / 9 code was playable at an acceptable speed.
Programmable shaders were a huge technological leap in graphics programming, and it took several generations of hardware advancement before hardware could run them at an acceptable level. Ray tracing is a similarly huge technological leap, and once again you need hardware with the base capabilities to exist in order for devs to be able to build software, and then the hardware will still need to advance in order to play at the resolutions and frame rates that we are used to with current rendering technology.
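For anyone wondering what makes ray tracing such a heavy lift compared to rasterization: the core operation is solving an intersection test per ray, and a single 1080p frame at one ray per pixel is already ~2 million rays before any bounces or shadows. Here's a minimal Python sketch of the simplest case, a ray-sphere test; this is just the underlying math, not how RTX hardware or the DXR API actually implements it:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance t along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    `direction` is assumed to be normalized.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c             # a == 1 for a normalized direction
    if disc < 0.0:
        return None                    # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0   # nearest of the two roots
    return t if t > 0.0 else None      # ignore hits behind the origin

# One ray per pixel at 1920x1080 is ~2 million of these tests per frame,
# before bounces, shadow rays, or acceleration-structure traversal.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

Multiply that per-ray cost by bounces, shadow rays, and scene complexity, and it's clear why dedicated hardware (and several generations of it) is needed before this runs at the frame rates we expect from rasterization.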
 
That would only move the exact same problems to the 3000 series.

No one is forcing you to buy this generation. No matter when you introduce a major new tech like this, it is going to be a pain point, as you are introducing technology that takes up a lot of die space and has no software to take advantage of it (a chicken-and-egg problem).

There is no easy way around this.

The sooner you do it, the sooner you put that transition pain behind you. Now when they get the gen 2 RTX cards out, there will actually be some software to support them.

A 3000 series with 2nd gen RTX is going to be a lot better than a 3000 series with 1st gen RTX, because there will be software support.

True enough. It does get people started on software. Next gen at 7nm should be more compelling, as the extra transistors will be easier to accommodate from a die-size point of view, and the clock/power advantages of 7nm will also bring a performance uplift. Hopefully that's a quicker product cycle than the last one.
 
Exactly this. When the GeForce 3 came out, it cost much more than the GeForce 2 but performed identically to a GeForce 2 Ultra. Its defining feature set, programmable shaders, didn't deliver usable performance on the GeForce 3 itself, and even the GeForce 4 struggled. But without those cards leading the way, we'd still have DirectX 7 / PlayStation 2 level graphics.

It will take a few years for developers to get a handle on ray tracing tech, and for the hardware to handle it with good performance. But the hardware with the feature set must exist in the first place for any of that to happen.

And unlike in 2001 with the transition from fixed function to programmable shaders, at least the new cards offer a performance boost in existing rasterized titles.

I had a GeForce2 GTS and upgraded to a GeForce3 Ti 200, and I saw a good jump in performance. Plus it easily OCed way past Ti 500 speeds.
 
Thanks for 100% reinforcing my point. The GeForce 3 and Radeon 8500 were contemporaries, and both were no more competent than their DX7 predecessors at DX7 code, and in terms of DX8 code (programmable shaders) they were pretty poor performers, useful as a platform for Carmack to build id Tech 4 but that's about it.
The GeForce 4 was a rock star at DX7 code.
But try playing Doom 3 or Half-Life 2 on a GeForce 3, or a Radeon 8500, or even a GeForce 4, and you're going to have a bad time. It wasn't until the Radeon 9700 and GeForce 6 series that DirectX 8 / 9 code was playable at an acceptable speed.
Programmable shaders were a huge technological leap in graphics programming, and it took several generations of hardware advancement before hardware could run them at an acceptable level. Ray tracing is a similarly huge technological leap, and once again you need hardware with the base capabilities to exist in order for devs to be able to build software, and then the hardware will still need to advance in order to play at the resolutions and frame rates that we are used to with current rendering technology.
The GeForce 4 was two years old by the time Doom 3 came out, which would be like a four- or five-year-old card nowadays. On top of that, Doom 3 was a bleeding-edge game in terms of graphics tech: a DirectX 9 class game, meant for the GeForce 6, and next gen compared to anything the GeForce 4 was designed for. So yeah, it's no surprise the GeForce 4 couldn't keep up. Doom 3 actually killed my BFG Ti 4200 with artifacting, which became permanent in all games.

Also, there were games that used DX8 and 8.1 before Doom 3, games that ran great on the Radeon 8500/9000 Pro and GeForce 4, such as the aforementioned Max Payne 2. Even though my Ti 4200 was quite a bit more powerful than my 9000 Pro (a DX8.1 card derived from the 8500, despite the "9" naming scheme), Max Payne 2's actual gaming experience felt the same, and it looked better on the 9000 Pro because it could display the extra 8.1 shaders.
 