Nvidia 3000 Series pricing and specs

What I am dying to know is the physical size of these cards, both the 3080 and the 3090. I have a new build using a Fractal Design Node 804 case and for the graphics card I am limited to 290mm if I want to keep one of the front fans in place or 320mm if I am willing to remove one of the front fans. I am leaning towards the 3090 but it really comes down to space.
Nvidia finally posted their specs on their website. The 3080 is 285mm long (112mm wide) and the 3090 is 313mm long (138mm wide). The 3080 is listed at 320W while the 3090 is listed at 350W. Both recommend a 750W PSU.
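For anyone doing the same clearance math, here's a quick back-of-the-napkin check using the Node 804 limits quoted above and the Founders Edition lengths; partner cards will run longer, so this only covers the FE models:

```python
# Node 804 clearance check for the Founders Edition lengths quoted above.
# Case limits are the figures from the earlier post; AIB partner cards will vary.
CLEARANCE_WITH_FRONT_FAN = 290      # mm, with both front fans installed
CLEARANCE_WITHOUT_FRONT_FAN = 320   # mm, with one front fan removed

cards = {"RTX 3080 FE": 285, "RTX 3090 FE": 313}  # FE lengths in mm

for name, length in cards.items():
    if length <= CLEARANCE_WITH_FRONT_FAN:
        verdict = "fits with the front fans in place"
    elif length <= CLEARANCE_WITHOUT_FRONT_FAN:
        verdict = "fits only if one front fan is removed"
    else:
        verdict = "does not fit"
    print(f"{name} ({length}mm): {verdict}")
```

So on paper the 3080 FE clears with both fans, while the 3090 FE only fits with a front fan pulled.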

As for CUDA cores and clock speeds: the 3080 has 8,704 CUDA cores and a boost clock of 1.71GHz. The 3090 has 10,496 CUDA cores and a boost clock of 1.7GHz.
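If you want the paper FP32 numbers those specs imply, here's a rough sketch, assuming the standard 2 FLOPs per CUDA core per clock (one FMA); these are peak boost-clock figures, not game performance:

```python
# Peak FP32 = CUDA cores x boost clock x 2 FLOPs per core per clock (one FMA).
# Paper numbers at the advertised boost clock, not sustained game performance.
cards = {
    "RTX 3080": (8704, 1.71e9),   # (CUDA cores, boost clock in Hz)
    "RTX 3090": (10496, 1.70e9),
}

for name, (cores, clock) in cards.items():
    tflops = cores * clock * 2 / 1e12
    print(f"{name}: {tflops:.1f} TFLOPS FP32")
# RTX 3080: ~29.8 TFLOPS, RTX 3090: ~35.7 TFLOPS
```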
 
The 3080 has 8,704 CUDA cores and a boost clock of 1.71GHz. The 3090 has 10,496 CUDA cores and a boost clock of 1.7GHz.

Holy fucking shit... My 2080ti just got spanked by the 3070. lol. The number of CUDA cores this generation is going to be absolutely bonkers. I can't wait to see some specs. I am totally happy with my 2080ti overall and it will be good for years to come, but those 3xxx cards are just insane.

Link for those interested: https://www.nvidia.com/en-us/geforc...id=nv-int-cwmfg-49069#cid=_nv-int-cwmfg_en-us
 
I'm curious if the Big Navi announcement was waiting on today's event. I'm thinking AMD will announce before the Ampere cards are available. If not, they're probably not competitive. Thoughts?
 
I'm curious if the Big Navi announcement was waiting on today's event. I'm thinking AMD will announce before the Ampere cards are available. If not, they're probably not competitive. Thoughts?
AMD has to do something; Jensen just bullied the hell out of their perf/$ argument at the high end. It's like being in a fight: at least scratch his ass.
 
Holy fucking shit... My 2080ti just got spanked by the 3070. lol. The number of CUDA cores this generation is going to be absolutely bonkers. I can't wait to see some specs. I am totally happy with my 2080ti overall and it will be good for years to come, but those 3xxx cards are just insane.
The 3070 has 5,888 CUDA cores, a boost clock of 1.73GHz, and a length of 242mm (112mm wide).
 
I just wish they had priced the 3090 at $1,300 max. The 2080ti was around $1,200 at launch, right?

Correct. The cheapest 2080ti came in at $999 for the EVGA Black Edition but that was released months after the regular 2080tis hit the market.
 
I'm actually kind of surprised at how close the boost clocks are for all three models, especially for the 3090. The Titans and Tis of the past typically had lower peak clocks than the lower models. A difference of only around 0.03GHz (30MHz) between the lowest and highest card is pretty sweet. Now the base clocks have a larger spread, but my guess is that has to do with targeting certain power-savings goals, along with the higher models having way more CUDA cores to handle the workload at a lower clock speed.
 
The 3070 has 5,888 CUDA cores, a boost clock of 1.73GHz, and a length of 242mm (112mm wide).

I know... vs. my 2080ti, which has:

4,352 CUDA cores, a boost clock of 1.6GHz, and a length of 269mm. If the 3070 price rumors are true, then the 3070 will be an absolute beast of a bargain.
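Out of curiosity, here's the same kind of paper math run on those two spec lines, with the big caveat that Ampere's core counting isn't directly comparable to Turing's (which comes up further down the thread):

```python
# Paper FP32 throughput (cores x boost clock x 2 FLOPs per clock) for the two
# cards being compared. Real games won't scale this cleanly because Ampere
# counts its shaders differently than Turing does.
cards = {
    "RTX 3070":    (5888, 1.73e9),   # (CUDA cores, boost clock in Hz)
    "RTX 2080 Ti": (4352, 1.60e9),
}

tflops = {name: cores * clock * 2 / 1e12 for name, (cores, clock) in cards.items()}
for name, value in tflops.items():
    print(f"{name}: {value:.1f} TFLOPS FP32")
print(f"Paper ratio: {tflops['RTX 3070'] / tflops['RTX 2080 Ti']:.2f}x")
# ~20.4 vs ~13.9 TFLOPS, roughly 1.46x on paper
```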
 
Correct. The cheapest 2080ti came in at $999 for the EVGA Black Edition but that was released months after the regular 2080tis hit the market.
Yeah, and with the 3090 FE starting at $1,499, that is certainly going to put a hurting on some wallets out there. But considering that it replaces the Titan line and has 24GB of memory, the price isn't bad, even though it is more expensive than the outgoing 2080Ti FE was at launch. The Titan RTX still goes for $2,499.

Since I have a 4K monitor and also do VR, I am begrudgingly leaning towards the 3090. I'm still using my GTX 1080 FE from when I bought it back in May 2016. My goal is to have a card that lasts me at least another 4 years.
 
...hitting the nail on the head. CUDA core count is way more important than boost clock.
Looks like Nvidia is counting 2x for the actual CUDA cores, since Ampere can do two operations per clock on the shaders where Turing could do one. This should greatly increase rasterization speeds, and I wonder why that was not highlighted or emphasized as much. Having a 360Hz monitor and the new G-Sync, and then playing Marbles at 30fps at 1440p, isn't something I think many would want to do for any length of time, even being as pretty as it is.
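One rough way to sanity-check that double-counting idea is to halve the advertised Ampere counts and compare "Turing-style" shader numbers. This is just a sketch of that interpretation; the truth sits in between, since one of Ampere's two FP32 datapaths per SM is shared with INT32 work rather than being extra FP32 all the time:

```python
# Rough normalization of Ampere's advertised CUDA core counts back to a
# "Turing-style" count by dividing by two, since Ampere counts both FP32
# datapaths per SM. Reality lands in between, because one of the two
# datapaths is shared with INT32 work.
ampere_advertised = {"RTX 3070": 5888, "RTX 3080": 8704, "RTX 3090": 10496}

for name, cores in ampere_advertised.items():
    print(f"{name}: {cores} advertised -> ~{cores // 2} Turing-style shaders")
# 3070 -> 2944, 3080 -> 4352 (same count as a 2080 Ti), 3090 -> 5248
```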
 
1060 to 3070

That'd be a solid upgrade, right?

Just basic gaming at 1080p @ 60. COD is probably the most graphically intense game I play.
 
Looks like Nvidia is counting 2x for the actual CUDA cores, since Ampere can do two operations per clock on the shaders where Turing could do one. This should greatly increase rasterization speeds, and I wonder why that was not highlighted or emphasized as much. Having a 360Hz monitor and the new G-Sync, and then playing Marbles at 30fps at 1440p, isn't something I think many would want to do for any length of time, even being as pretty as it is.

These announcements are more gamer-focused. For the nerds among us, we'll have to wait for the technical articles to get the nitty-gritty details.
 
Holy fucking shit... My 2080ti just got spanked by the 3070. lol. The number of CUDA cores this generation is going to be absolutely bonkers. I can't wait to see some specs. I am totally happy with my 2080ti overall and it will be good for years to come, but those 3xxx cards are just insane.

Link for those interested: https://www.nvidia.com/en-us/geforc...id=nv-int-cwmfg-49069#cid=_nv-int-cwmfg_en-us
I have no regrets owning the 2080 Ti for almost two years, as I got to enjoy the highest FPS possible, but boy do I feel bad for the new 2080Ti owners who bought theirs not too long ago.
 
I'm actually kind of surprised at how close the boost clocks are for all three models, especially for the 3090. The Titans and Tis of the past typically had lower peak clocks than the lower models. A difference of only around 0.03GHz (30MHz) between the lowest and highest card is pretty sweet. Now the base clocks have a larger spread, but my guess is that has to do with targeting certain power-savings goals, along with the higher models having way more CUDA cores to handle the workload at a lower clock speed.

That's what you get with Samsung 8nm. Remember, Turing was essentially an enhanced 16nm, so the process had aged and improved, allowing clock speeds up over 2GHz. It will be interesting to see if Nvidia stays the course with Samsung or goes back to TSMC, where they would see better improvements than with Samsung, resulting in higher core clocks. Just look at the PS5 clocks as an example: beastly for the wattage and cooling that thing is going to be dealing with.

The GDDR6 really gimped them on memory amounts; it's too bad they weren't able to get to 12 or 16GB on the closer-to-mainstream cards. Something with performance close to the 2070 and 16GB of memory for 500 bucks will be hard to pass up for the many games that still don't give a shit about RTX.

After seeing them pump the gas on the streaming elements they will be enabling through their software stack, I know a lot of fledgling streamers will be interested; I know my son will want one.

I'm really disappointed we didn't get any benchmarks today. There was one slide that really grabbed my attention, the performance-per-watt chart:

[Attached: performance-per-watt slide from the presentation]

Here we see that at 240W, Turing is hitting 60fps, with Ampere at say 90fps. That would give us the 33% performance gain typical of a recent refresh, not an entirely new fab node plus architecture. At 320W, we see about 105fps for Ampere, which would be a 43% increase over Turing. At (assuming) 350W (the 3090 spec), that would equate to a little over 50% faster. Now there are other considerations: the graph says Control at 4K but doesn't mention ray tracing, which I would assume is off. So this is just a shader snapshot. I think that going with Samsung hurt them power-wise, as they would have been able to get closer to these improvements in the same power envelope otherwise.
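Worth noting that the same eyeballed readings can be quoted two different ways; here's a quick sketch re-deriving the ratios from the numbers above (the readings themselves are rough, so don't lean too hard on the exact percentages):

```python
# Re-deriving ratios from the rough readings above: 60 fps for Turing at 240W,
# 90 fps for Ampere at 240W, 105 fps for Ampere at 320W. The same pair of
# numbers can be quoted as "Ampere is X% faster" or "Turing is Y% behind",
# and those two percentages are not the same thing.
turing_fps = 60
ampere_points = {"Ampere @ 240W": 90, "Ampere @ 320W": 105}

for name, fps in ampere_points.items():
    faster = (fps / turing_fps - 1) * 100   # Ampere's gain over Turing
    behind = (1 - turing_fps / fps) * 100   # Turing's deficit vs Ampere
    print(f"{name}: {faster:.0f}% faster, or Turing {behind:.0f}% behind")
# Ampere @ 240W: 50% faster / 33% behind; Ampere @ 320W: 75% faster / 43% behind
```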

For those of us who aren't dying for more ray tracing and still want enough shader power, it's not as much as the presentation would have led you to believe, yet still acceptable. I guess we'll never really see the days of a straight 70% watt-for-watt increase in raw fps like we did with older architectures. What are your thoughts on this?

The only card I can see myself getting is the RTX 3090. The 10GB of memory on the 3080 pains me to no end, especially with what is on offer from the console makers. I understand why they had to do it with the type of memory chips they wanted to use (see the AnandTech writeup about this), but puleeeeeeze, it's still 700 bucks for the cheapest card, the 10GB 3080.

However, I'll probably be buying my son a 3070 when they get released, as I flipped my 2080ti last week for almost what I paid for it at launch.
 
After seeing them pump the gas on the streaming elements they will be enabling through their software stack, I know a lot of fledgling streamers will be interested; I know my son will want one.
A lot of people are going to discount the additional hardware and software stack that enables features for livestreaming... but this is technology that pushes those capabilities to a much wider audience. Previously you'd want twelve or sixteen CPU cores to do that, a treated room, a sound setup that rivals the price of the compute hardware, and so on.
 
I have no regrets owning the 2080 Ti for almost two years, as I got to enjoy the highest FPS possible, but boy do I feel bad for the new 2080Ti owners who bought theirs not too long ago.
Oh for sure. I'd be super pissed too. I don't have any regrets either with the 2080ti. I had a 2080 initially and did a Step-Up with Nvidia when the TIs came back in stock and I regret nothing. It's still an insanely powerful card and will be good for me at least for another couple years.
 
I guess we'll never really see the days of a straight 70% watt-for-watt increase in raw fps like we did with older architectures. What are your thoughts on this?

This has been known for ages.

Though it does look like we will get a 70%+ increase in perf/dollar, and I think a lot more people care about that.
 
I have no regrets owning the 2080 Ti for almost two years, as I got to enjoy the highest FPS possible, but boy do I feel bad for the new 2080Ti owners who bought theirs not too long ago.
When they know a new GPU is on the horizon and they buy the top end anyway, that's kind of their fault.
 
I have no regrets owning the 2080 Ti for almost two years, as I got to enjoy the highest FPS possible, but boy do I feel bad for the new 2080Ti owners who bought theirs not too long ago.

I bought a 2080S about 2 months ago since my 1070 just couldn't handle Modern Warfare or EFT @ 3440x1440. As it turns out, it was super helpful for Assassin's Creed: Odyssey and Horizon: Zero Dawn as well. So, given that I was able to comfortably play those instead of the slideshow I would have otherwise experienced, I don't feel that bad.
 
I'm thinking any Ti cards will be reserved until after the performance of Navi 2 is known.
Should Nvidia later need to slot a card in between the 3070 and 3080 to counter, you'll probably see a 10-12GB RTX 3070 Ti.
Likewise, if Navi 2 slots in between the 3080 and 3090, you'll probably see a 12-16GB RTX 3080 Ti.
In any event... given what we now know, AMD will have to turn up to the party with something spectacular.
I really hope they do.
Nvidia's pricing seems 'thumb-in-the-eye' personal, as if to say... 'Don't challenge me... ever. I'm going to kick your a*s regardless.'
 
Holy fucking shit... My 2080ti just got spanked by the 3070. lol. The number of CUDA cores this generation is going to be absolutely bonkers. I can't wait to see some specs. I am totally happy with my 2080ti overall and it will be good for years to come, but those 3xxx cards are just insane.

Link for those interested: https://www.nvidia.com/en-us/geforc...id=nv-int-cwmfg-49069#cid=_nv-int-cwmfg_en-us

I think Nvidia is playing with the CUDA core numbers a bit, as in using a different definition than for the 20x0 series. I'm also guessing their "x is 2 times faster than y" benchmark in the presentation was probably with RTX on. I have a hard time believing a 3080 is that much faster than a 2080ti in non-RTX games. Guess we'll have to wait for the benchmarks.

Side note - it's sad not to have HardOCP around to benchmark these cards. I know Kyle has a lot of stuff going on, but I really wish he could fire up the site again.
 
I think Nvidia is playing with the CUDA core numbers a bit, as in using a different definition than for the 20x0 series. I'm also guessing their "x is 2 times faster than y" benchmark in the presentation was probably with RTX on. I have a hard time believing a 3080 is that much faster than a 2080ti in non-RTX games. Guess we'll have to wait for the benchmarks.

Side note - it's sad not to have HardOCP around to benchmark these cards. I know Kyle has a lot of stuff going on, but I really wish he could fire up the site again.

Digital Foundry put out a video that shows the 3080 running about 80% faster than a 2080 in non-RTX games. In RTX situations, it got a little closer to the 2x number Jensen went on about. Either way, 80% over a 2080 is ridiculous. A 2080ti is what, 30% faster than a 2080? So you're still looking at roughly a 40% increase over a 2080ti. That is massive.
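For anyone wanting to see the arithmetic, here's a quick sketch of the implied gain over a 2080 Ti under a few assumptions for the 2080 Ti's own lead over a 2080; only the 80% figure comes from the DF video, the rest are guesses:

```python
# Implied 3080 gain over a 2080 Ti, taking the ~80% figure over a 2080 from the
# Digital Foundry video and trying a few assumptions for the 2080 Ti's own lead.
gain_over_2080 = 0.80

for ti_lead in (0.25, 0.30, 0.35):   # assumed 2080 Ti advantage over a 2080
    implied = (1 + gain_over_2080) / (1 + ti_lead) - 1
    print(f"2080 Ti +{ti_lead:.0%} over 2080 -> 3080 ~{implied:.0%} over 2080 Ti")
# Roughly 33-44% depending on what you assume for the 2080 Ti
```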
 
Digital Foundry put out a video that shows the 3080 running about 80% faster than a 2080 in non-RTX games. In RTX situations, it got a little closer to the 2x number Jensen went on about. Either way, 80% over a 2080 is ridiculous. A 2080ti is what, 30% faster than a 2080? So you're still looking at roughly a 40% increase over a 2080ti. That is massive.


Holy shit, really? That's frickin' nuts. In real-world gaming?
 
Holy shit, really? That's frickin' nuts. In real-world gaming?

In both in-game benchmarks and in gameplay for games without a built-in benchmark.

edit: Keep in mind these are Nvidia-chosen games, so they may be the absolute best-case scenario. It's worth noting that DF mentions that and still says they'd never really seen a generational jump this big before.

 
Looks like I'll be selling/upgrading my EVGA 2080S hybrid and going to a 3080. Just waiting for EVGA's offering.
 
I just want a 3080 Hybrid. Or a Ti/20GB version if that comes out when I'm ready to buy, because there's a new monitor involved that's going to cost even more... and probably a new case as well.
 
That's what you get with Samsung 8nm. Remember, Turing was essentially an enhanced 16nm, so the process had aged and improved, allowing clock speeds up over 2GHz. It will be interesting to see if Nvidia stays the course with Samsung or goes back to TSMC, where they would see better improvements than with Samsung, resulting in higher core clocks. Just look at the PS5 clocks as an example: beastly for the wattage and cooling that thing is going to be dealing with.

The GDDR6 really gimped them on memory amounts; it's too bad they weren't able to get to 12 or 16GB on the closer-to-mainstream cards. Something with performance close to the 2070 and 16GB of memory for 500 bucks will be hard to pass up for the many games that still don't give a shit about RTX.

After seeing them pump the gas on the streaming elements they will be enabling through their software stack, I know a lot of fledgling streamers will be interested; I know my son will want one.

I'm really disappointed we didn't get any benchmarks today. There was one slide that really grabbed my attention, the performance-per-watt chart:

[Attached: performance-per-watt slide from the presentation]

Here we see that at 240W, Turing is hitting 60fps, with Ampere at say 90fps. That would give us the 33% performance gain typical of a recent refresh, not an entirely new fab node plus architecture. At 320W, we see about 105fps for Ampere, which would be a 43% increase over Turing. At (assuming) 350W (the 3090 spec), that would equate to a little over 50% faster. Now there are other considerations: the graph says Control at 4K but doesn't mention ray tracing, which I would assume is off. So this is just a shader snapshot. I think that going with Samsung hurt them power-wise, as they would have been able to get closer to these improvements in the same power envelope otherwise.

For those of us who aren't dying for more ray tracing and still want enough shader power, it's not as much as the presentation would have led you to believe, yet still acceptable. I guess we'll never really see the days of a straight 70% watt-for-watt increase in raw fps like we did with older architectures. What are your thoughts on this?

The only card I can see myself getting is the RTX 3090. The 10GB of memory on the 3080 pains me to no end, especially with what is on offer from the console makers. I understand why they had to do it with the type of memory chips they wanted to use (see the AnandTech writeup about this), but puleeeeeeze, it's still 700 bucks for the cheapest card, the 10GB 3080.

However, I'll probably be buying my son a 3070 when they get released, as I flipped my 2080ti last week for almost what I paid for it at launch.
Here are my thoughts on it. I am not disappointed in the clock speeds. In fact, I am quite impressed, more so for the 3090, seeing that its boost clock is so close to the 3070 and 3080 at only a 30MHz spread. Considering that the transistor count increased by 50%, CUDA cores at least doubled, and who knows about the actual number of RT and Tensor cores, seeing them hold a higher clock value is something. Now what will be interesting to see is how much higher the cards are able to boost while maintaining thermal targets, especially with this new cooling solution.

The Founders Editions of the 1080 Ti (16nm) had a stock boost clock of around 1.58GHz, the 2080 Ti (12nm) 1.63GHz, the Titan Xp (16nm) 1.58GHz, and the Titan RTX (12nm) 1.77GHz. Going from the 16nm to the 12nm fab didn't net too much of a gain clock-speed-wise. I wouldn't have expected going from 12nm to 8/7nm to net much either, especially when factoring in the increase in complexity of the chip, as well as them maybe focusing on other areas of optimization versus going purely for clock speed (almost the Intel vs AMD argument currently going on). While I agree that TSMC 7nm may have been a little more efficient than Samsung, I don't think the end difference was going to be huge considering the sizes of these chips versus the previous generations.
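Just to put numbers on that clock-speed point, here's a quick calc of the step-to-step changes using the FE boost clocks listed above plus the 3090's 1.70GHz from earlier in the thread:

```python
# Percentage change between the FE boost clocks mentioned above, stepping
# through the big-die cards in rough chronological order (clocks in GHz).
clocks = [
    ("Titan Xp (16nm)",   1.58),
    ("2080 Ti FE (12nm)", 1.63),
    ("Titan RTX (12nm)",  1.77),
    ("3090 FE (8nm)",     1.70),
]

for (prev_name, prev), (name, cur) in zip(clocks, clocks[1:]):
    change = (cur / prev - 1) * 100
    print(f"{prev_name} -> {name}: {change:+.1f}%")
# Each step is in the single digits either way; node changes alone haven't
# moved stock boost clocks much, which is the point being made above.
```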

My 1080 FE was rated at a boost clock of just 1.73GHz, but I have been able to get it to boost to around 2.025GHz or so and comfortably hold around 1.93-2.0GHz. And that was with a GPU that really took some effort to squeeze anything out of and was rocking around a quarter of the transistors and a quarter of the CUDA cores. I'd be curious to see if we can expect at least the same level of overclockability or better. The 20-choke power setup will hopefully help with this, provided the cooling solution can hold up and Nvidia doesn't artificially hold things back too much.

I agree that if anything is shooting the 3080 in the foot, it's the 10GB of memory, and that is one of the reasons I am leaning towards the 3090, if I can squeeze it into my case. While I think 24GB is overkill for what I ever plan to use it for, it would be interesting to see if developers find ways to leverage the extra headroom. 16GB, I think, would have been the sweet spot if it was possible.

My initial plan was to order a cooling block for it and convert the card to water cooling, which should save quite a bit of room in the case. But this new design (and the thermal envelope on top of that) is leaving me a bit nervous about doing the work myself and tying it into my custom cooling loop. But if the stock cooling solution works well and is quieter like they say, then maybe I would be inclined to leave it alone for the time being. The system I am building will have two 240mm x 54mm radiators that were originally going to be used to cool both a Ryzen 9 CPU and a single GPU. Looks like they may be just cooling a CPU for now.
 