RTX 3050 Ti for laptops only? What is Nvidia thinking?

wandplus

What is the possible rationale? I mean, they re-introduced the GTX 1650, but no RTX 3050 Ti for desktop?
 
Where are you getting that from? Nvidia hasn't said so, and if you listen to twatter, pallet has 25 SKUs for 3050 Ti based cards.
Twatter? Pallet?

I just wanna say here that I have found pendragon1 to be an astute observer of the scene. And hey, I'm originally from NY, and we don't have the time to say nice things that aren't really true.
 
Twatter? Pallet?

I just wanna say here that I have found pendragon1 to be an astute observer of the scene. And hey, I'm originally from NY, and we don't have the time to say nice things that aren't really true.
Thanks, I try not to talk out my ass too much :) and it's Palit, forgot how to speel it. Saw this:
https://www.techradar.com/news/nvid...d-rtx-3080-ti-desktop-gpus-are-on-the-horizon
and it leads to this:
https://twitter.com/KOMACHI_ENSAKA/status/1392537698204676099?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1392537698204676099|twgr^|twcon^s1_&ref_url=https://www.techradar.com/news/nvidia-geforce-rtx-3050-ti-and-rtx-3080-ti-desktop-gpus-are-on-the-horizon
These are all supposedly Palit SKUs for various desktop 3050 Tis:
[attached screenshot: Palit desktop RTX 3050 Ti SKU list]
 
There's certainly going to be a desktop 3050Ti

Keep in mind that with this GPU generation, NV (and AMD) are back on their BS of the mobile chips not actually being the same config (or even the same die) as the desktop parts with the same name. So, I'm confident there will be a 3050Ti for desktop, but it hasn't come up yet because it's going to be a different product than the mobile version (probably closer to the mobile 3060) and NV is focusing on filling their mobile stack at the moment.
 
There's certainly going to be a desktop 3050Ti

Keep in mind that with this GPU generation, NV (and AMD) are back on their BS of the mobile chips not actually being the same config (or even the same die) as the desktop parts with the same name. So, I'm confident there will be a 3050Ti for desktop, but it hasn't come up yet because it's going to be a different product than the mobile version (probably closer to the mobile 3060) and NV is focusing on filling their mobile stack at the moment.

It makes no sense because higher core count GPUs actually get more performance/watt than cut-down versions. It's insulting to people who actually know the way this stuff works.
 
I looked at the Asus and MSI websites to check out what they have in the RTX 3050. None are listed, so I'm a bit confused by some comments here.

On the other hand, Bitcoin has plunged lately from over $63K to over $36K today. It was under $10K less than a year ago. I imagine some miners would be limping along if it went that low again.
 
I looked at the Asus and MSI websites to check out what they have in the RTX 3050. None are listed, so I'm a bit confused by some comments here.

On the other hand, Bitcoin has plunged lately from over $63K to over $36K today. It was under $10K less than a year ago. I imagine some miners would be limping along if it went that low again.
They won't be listed yet since the desktop 3050 hasn't been officially announced. We're all operating on info from the leak/rumor sites, which can be pretty accurate if you know what to trust, but won't match up with "official" information until NV makes the announcement.
 
It makes no sense because higher core count GPUs actually get more performance/watt than cut-down versions. It's insulting to people who actually know the way this stuff works.
So, my post was actually not totally accurate... I rechecked the specs (since Ampere mobile details aren't quite seared into my brain yet like previous gens) and it's only the 3080 that's a bamboozle. The rest actually have a slightly higher core count than the desktop variants, which makes sense with the "wide and slow for efficiency" thing like you pointed out. I'm guessing that GA102 is just so power hungry that the frequency would have to be pushed so low as to f*ck up the efficiency curve: taking a ~220W GPU down to ~100-165W is one thing, but going from 320W to 100-165W is a pretty huge jump, and IME there's a point of massively diminishing returns when the power limit goes too low. Die size may play into that a bit too: TU104 was physically the largest chip to ever go into regular laptops, and that was already enormous at 545mm^2, but GA102 is also not that much larger than TU104 (compared to TU102 at least), so it's hard to say how much of a factor that was. It's certainly less expensive for NV to use a 392mm^2 die instead of a 628mm^2 one.

AMD looks to be doing a similar thing, with the 6700M being a very modestly cut-down 6700XT and pretty much the same as we expect from the desktop 6700, but the 6800M is only a small jump up from that and "merely" the same config as a desktop 6700XT.
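To put rough numbers on the diminishing-returns point: here's a back-of-the-envelope Python sketch. It's a toy model with my own assumption baked in (perf scales roughly with the cube root of board power, since power goes roughly as clock times voltage squared and voltage roughly tracks clock), not measured data, and real efficiency curves fall off even faster once the limit gets very low.

```python
# Toy model only: assume perf ~ power^(1/3). Not measured data; real
# Ampere efficiency curves drop off harder at very low power limits.

def relative_perf(power_w: float, reference_power_w: float) -> float:
    """Estimated perf vs. the same chip at its desktop reference power."""
    return (power_w / reference_power_w) ** (1 / 3)

for name, ref_w in [("~220W GA104-class part", 220), ("~320W GA102-class part", 320)]:
    print(f"{name} limited to 150W: ~{relative_perf(150, ref_w):.0%} of desktop perf")

# ~220W GA104-class part limited to 150W: ~88% of desktop perf
# ~320W GA102-class part limited to 150W: ~78% of desktop perf
```

Even with this optimistic model the GA102-class part gives up a lot more, which lines up with NV preferring an uncut GA104 over a heavily throttled GA102 for mobile.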
 
It makes no sense because higher core count GPUs actually get more performance/watt than cut-down versions. It's insulting to people who actually know the way this stuff works.

The RTX 3060 notebook part uses the uncut cores. The reason the 3070 uses more heavily cut cores is that they have already hit the power wall with the 3060 at 120W (so a 3070 at 150W is nuts).

You can't use much magic to improve performance (when you take into account the 15% difference in core count, you would be limited to like 5% improved performance when already power-limited at 150W).
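Quick sanity check on that arithmetic with a toy model (my assumptions, nothing from NV): if perf scales with cores times clock, then at a fixed power limit the gain from ~15% more cores depends on whether voltage can still drop with clock. Rough Python sketch of the two bounds:

```python
# Toy bounds on what ~15% more cores buys at a fixed power limit.
# Assumptions (mine, not measurements): perf ~ cores * clock, and either
#   (a) power ~ cores * clock^3  (voltage still scales down with clock), or
#   (b) power ~ cores * clock    (already sitting at the voltage floor).

core_gain = 1.15

clock_a = core_gain ** (-1 / 3)   # clock drop needed to hold power in case (a)
clock_b = 1 / core_gain           # clock drop needed to hold power in case (b)

print(f"(a) voltage still scales: +{core_gain * clock_a - 1:.0%} perf")  # ~ +10%
print(f"(b) at the voltage floor: +{core_gain * clock_b - 1:.0%} perf")  # +0%
```

A single-digit gain like the ~5% mentioned above sits between those bounds, so the "not worth spending a bigger die on it" point checks out.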
 
The RTX 3060 notebook part uses the uncut cores. The reason the 3070 uses more heavily cut cores is that they have already hit the power wall with the 3060 at 120W (so a 3070 at 150W is nuts).

You can't use much magic to improve performance (when you take into account the 15% difference in core count, you would be limited to like 5% improved performance when already power-limited at 150W).
Oops, yeah you're right, the 3070 is cut too. Geez, I can't keep this straight... at least previous generations were consistently cut or not cut across the entire stack (the "special" 980 version being the only exception IIRC), but Ampere is all over the place.
 
So while all this discussion is interesting, we have no way of knowing when we can actually buy one of these models (assuming that's what you want).
 
So while all this discussion is interesting, we have no way of knowing when we can actually buy one of these models (assuming that's what you want).
Looks like they're doing laptops first. That makes sense to me given the current environment. Miners are less likely to snap up all the low end GPUs if they have to buy a whole laptop to get one, plus a GPU like a 3050 makes more sense to me in a laptop. I'm mostly a desktop guy so when I buy a laptop I want it portable. Less power draw & less heat -> more portable. I don't want a brick that comes with a folding cart so I can game for an hour or two on a laptop with a GPU that pulls 150W.

Personally I think the 3080 using the "big" chip screwed up the stack for laptops. The last time NV had an "80" card that wasn't a Ti with a 384-bit bus was the 780. The 700 series was mostly rebadged 600 series cards except for the 780. 680->770, 670->760, etc. but the 780 used the big chip with a 384-bit bus. This time around they want a 3080 laptop chip but the desktop 3080 is using the big chip so laptops get an uncut version of the chip in the desktop 3070 as the laptop 3080 and it all cascades from there. The laptop 3070 has fewer cores than the desktop version because they wanted to leave some space between the laptop 3080 and the 3070.
 
Looks like they're doing laptops first. That makes sense to me given the current environment. Miners are less likely to snap up all the low end GPUs if they have to buy a whole laptop to get one, plus a GPU like a 3050 makes more sense to me in a laptop. I'm mostly a desktop guy so when I buy a laptop I want it portable. Less power draw & less heat -> more portable. I don't want a brick that comes with a folding cart so I can game for an hour or two on a laptop with a GPU that pulls 150W.

Personally I think the 3080 using the "big" chip screwed up the stack for laptops. The last time NV had an "80" card that wasn't a Ti with a 384-bit bus was the 780. The 700 series was mostly rebadged 600 series cards except for the 780. 680->770, 670->760, etc. but the 780 used the big chip with a 384-bit bus. This time around they want a 3080 laptop chip but the desktop 3080 is using the big chip so laptops get an uncut version of the chip in the desktop 3070 as the laptop 3080 and it all cascades from there. The laptop 3070 has fewer cores than the desktop version because they wanted to leave some space between the laptop 3080 and the 3070.
Excuse the noob question. What is a "big" chip? An "uncut" chip? I know what the bus and cores are.
 
Oops, yeah you're right, the 3070 is cut too. Geez, I can't keep this straight... at least previous generations were consistently cut or not cut across the entire stack (the "special" 980 version being the only exception IIRC), but Ampere is all over the place.
It's a lot easier to avoid the power limits when your full-speed desktop parts already top out at 150W (then binning plus a 10% down-clock gets your 970 down to 100-120W!)

The 3070 is a 225W monster, being cut down massively to 100-125W. The 3060 at up to 115W is a lot closer to its native desktop TDP of 180W.
 
Excuse the noob question. What is a "big" chip? An "uncut" chip? I know what the bus and cores are.
Just my description of the biggest chip NV comes out with in a given generation that actually ends up on a graphics card. So the 384-bit bus chip, typically cut down a bit for anything that isn't a Tesla card or Titan (aka xx80Ti), though in this latest generation we got a 3090 instead of a Titan.
 
Just my description of the biggest chip NV comes out with in a given generation that actually ends up on a graphics card. So the 384-bit bus chip, typically cut down a bit for anything that isn't a Tesla card or Titan (aka xx80Ti), though in this latest generation we got a 3090 instead of a Titan.
So for the 256 or 192 bit bus chips, is the physical die size the same or smaller? If the same, how is the chip "cut down?"
 
So for the 256 or 192 bit bus chips, is the physical die size the same or smaller? If the same, how is the chip "cut down?"
Cut down means parts of the chip are disabled. Like a desktop 3080 and 3090 use the same chip, but some parts of the chip are disabled for 3080s. The 3090 has 10496 Cuda cores and a 384-bit memory bus. The 3080 has the same number of physical cores and the chip could support 384-bit memory if it has no defects, but they disable some of the cores and 1/6th of the memory bus for the 3080 so it ends up with 8704 cores and a 320-bit memory bus.

The chips with smaller bus widths have a smaller die size and fewer cores than chips with a bigger bus width. You could reasonably call that "cut down", but the usual usage of the term at least on [H] is turning parts of a chip off rather than physically removing them.
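Putting the numbers from that example side by side (specs as quoted above, just the arithmetic):

```python
# Arithmetic on the GA102 example above: how much of the 3090's config
# the 3080 keeps after parts of the same physical die are disabled.

cores_3090, bus_3090 = 10496, 384
cores_3080, bus_3080 = 8704, 320

print(f"cores enabled on the 3080: {cores_3080 / cores_3090:.0%}")          # ~83%
print(f"memory bus enabled:        {bus_3080 / bus_3090:.0%} "
      f"({bus_3090 - bus_3080} bits, i.e. two 32-bit controllers, disabled)")
```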
 
So for the 256 or 192 bit bus chips, is the physical die size the same or smaller? If the same, how is the chip "cut down?"
edit: oops, ninja'd

The physical die size is always the same on the "cut" chips; it's an internal thing with (could be off here) fuses that they can selectively blow at the factory to cut off access to whatever Shader Modules/ROPs/TMUs/RT Cores/Tensor Cores they want to disable for a certain bin. So with the mobile 3070 for example, the die is the same as the desktop 3070 but some of the internals are electrically disabled. Also, the mobile chips are selected to be the ones that clock the highest at the lowest voltage, but that's a binning thing; the chips come from the same batches and are selected out depending on defects and performance.
 
AMD used to sell 3-core chips. I think they were 4 cores with 1 defective core disabled. Is it possible some of these chips have a defective part? Do some people ever try to enable the disabled cores?
 
AMD used to sell 3-core chips. I think they were 4 cores with 1 defective core disabled. Is it possible some of these chips have a defective part? Do some people ever try to enable the disabled cores?
It's likely that there are defects on the lower binned dies, although sometimes fully-functional dies are binned low to fill demand for lower-end parts (dunno how much that's happening with Ampere)

Re-enabling shaders on lower-binned chips has been done in the past, but I don't think I've heard of it being a thing since the Radeon HD 6000 days, like 10+ years ago. AFAIK these days cores are disabled on a physical level by selectively blowing traces inside the die, which would take some pretty intense equipment to repair, if it could be done at all without peeling the layers of the die apart, which is going into sci-fi territory.
 
I haven't heard of anything about re-enabling disabled parts of chips in a long time either. Back in the day those 3-core Athlon X3 chips wandplus was talking about could have the 4th core re-enabled. I never had one personally, but hacks to turn it back on were out there and plenty of people on [H] tried it. Results were mixed. If you were lucky you got a fully functional chip, but often the 4th core didn't work at all or wouldn't work well enough to want to keep it enabled. Maybe it could work but it killed your clock speed.
 
I haven't heard of anything about re-enabling disabled parts of chips in a long time either. Back in the day those 3-core Athlon X3 chips wandplus was talking about could have the 4th core re-enabled. I never had one personally, but hacks to turn it back on were out there and plenty of people on [H] tried it. Results were mixed. If you were lucky you got a fully functional chip, but often the 4th core didn't work at all or wouldn't work well enough to want to keep it enabled. Maybe it could work but it killed your clock speed.
It only happens, and may happen again when the competition is in the hole. There are no majik Radeons or Ryzens because they don't need to atm. Simple as that.
 