Rumors are that the top RTX 5000 GPU is targeting about double the performance of a 4090.

Ampere (RTX 3000) was a really good architecture held back by the Samsung 8N process. The reason we saw such an amazing uplift going from the 3000- to 4000-series GPUs was mainly the process improvement: it allowed Nvidia to push their products much further without incurring the power penalty imposed by Samsung 8N.
Unfortunately, I doubt we will see such an uplift again with the 5000 GPUs.
I think there's still some fuel left in the tank. The 4090 got a 0% increase in memory bandwidth over the 3090 Ti, so who knows, maybe GDDR7, with a huge uplift in memory bandwidth, can do a good amount of the heavy lifting for performance gains next gen. Let's not forget that Nvidia also managed to pull some decent gains going from Kepler to Maxwell, which was made on the same TSMC 28nm node. If they could achieve what they did with Maxwell, which had no node advantage over Kepler, then going to 3nm they definitely have some room to work with.
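For context on how much GDDR7 could help: peak bandwidth scales linearly with bus width and per-pin speed. A quick sketch of the numbers (the 21 Gbps GDDR6X figure is the known spec for both cards; the 32 Gbps GDDR7 speed is an assumed rumor figure, not confirmed):

```python
def bandwidth_gbps(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width / 8 bytes) * per-pin data rate."""
    return bus_width_bits / 8 * pin_speed_gbps

# Known: 3090 Ti and 4090 both run 384-bit GDDR6X at 21 Gbps -> identical bandwidth.
print(bandwidth_gbps(384, 21))  # 1008.0 GB/s
# Assumption: first-gen GDDR7 at 32 Gbps on the same 384-bit bus.
print(bandwidth_gbps(384, 32))  # 1536.0 GB/s, a ~52% uplift with no bus change
```

So even without widening the bus, the rumored GDDR7 speeds alone would deliver a bigger bandwidth jump than the 3090 Ti to 4090 transition did.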
That should be enough to play Immortals of Aveum at 4K without needing any help from upscaling.
The next generation of cards, especially Nvidia's, are gonna be AI first with graphics as an afterthought.
If they feel like they can keep making those margins on AI cards, and that there's any possibility the gaming cards compete with them, the gaming lineup could be timid and not much of a factor, especially for training.
More so if the lineups continue to diverge. The 3090 was quite far from an A100 for AI work, same for the 4090 and the H100, and we can expect the same again: the 5090/RTX family as a graphics-first, non-AI rendering-farm type of product, with ML training kept on the next Hopper-line product. Letting gaming cards threaten those 1000%-margin parts is something Nvidia will avoid if they can help it; those margins will eventually crash from competition, but not from Nvidia's own products, would be my guess.

It's gonna be for consumer-level AI. The hobby market for Stable Diffusion, LlamaGPT, AudioCraft, etc. is gonna grow exponentially. The amount of open-source stuff available just on the image side is incredible given how short a period of time this AI wave has existed.
Maybe if they can split inference from training, but I extremely doubt most of the die will be designed with AI in mind over graphics (with graphics as an afterthought) on the consumer cards.
https://civitai.com/
View: https://www.youtube.com/watch?v=IPSB_BKd9Dg
MLID is confirming what I suspected AMD is planning to do (along with plenty of others): moving their "mid-range" up a tier in performance and dropping support for entry/mid-range graphics in favor of APUs with on-board graphics.
https://www.guru3d.com/news-story/l...-point-apu-with-16-rdna3-5-compute-units.html
Strix Point, which is probably going to be the de facto laptop chip for damn near everything, will have 16 CUs, while we already know Strix Halo will have 40. 16 CUs would roughly be 8500-series graphics, while 40 would put it at 8700-series level. That means discrete graphics will probably start around 8700 and go up from there, since there is literally no point in producing a discrete GPU that's weaker than what an APU can bring. Unless they spin up some crazy low-power discrete graphics for laptops running alongside some kind of minimum-graphics APU, but that seems unlikely to me.
Moving the mid range up? Does that mean making an 8700XT but then saying it's actually the 8900XT in marketing material?
No, I'm thinking their "mid-range" cards will perform higher than what their codenames typically target.
So codenames N42 and N43 will start at least at the X700 level and go up from there. What they name them, who knows; maybe they'll even bump the names down a tier.
Imagine if a 600-series AMD product punches at a 70-series Nvidia level, it would be phenomenal marketing. (AMD I know you're in here, think about this seriously.)
Hawk Point: 12 CU 8300M
Strix point: 16 CU 8400M
Strix Halo: 40 CU 8600M
N43 (cut down): 40? CU 8600XT
N43 (full die): 40? CU 8700
N42 (cut down): 60? CU 8700XT
N42 (full die): 60? CU 8800
N41 (cut down) ?? CU 8800XT
N41 (full die(s)) ?? CU 8900
Golden samples ?? CU 8900XT (XTX for big RAM edition)
I know that's not how the naming scheme works now, but if you drop everything one tier, assuming you can fill out the stack, and price things competitively, your lower-tier name will take out a higher-tier Nvidia card.
N41 and N42 are the completely cancelled configurations, no? So by extension there shouldn't be an 8800-or-higher-class GPU at all next gen. If N43 really maxes out at 40 CUs then it really is just a mid-range 8700 (XT). There would be no need to do any shuffling around or renaming; just release it as the 8700 or 8700 XT for ~$450 and call it a day.
Yeah, sorry, brain fart, I was thinking about RDNA 5 and using RDNA 4 terms.
Edit: Yep, nope, the naming thing won't work, I don't think, they don't have the stack for it. Maybe for RDNA 5, but not 4.
The other thing that's confusing me is this APU that will supposedly have 40 CUs. If such a thing exists, why even make a mid-range 8700 XT at all if it's also going to max out at 40 CUs? How much faster would 40 CUs in a dGPU, with its own GDDR7 and higher TDP limits, be vs. 40 CUs in an APU? And such an APU surely cannot be cheap, which wouldn't appeal to its budget-constrained target audience.
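The bandwidth gap is the easiest part of that question to put rough numbers on. A toy comparison, where both the LPDDR5X config for the APU and the GDDR7 bus/speed for a hypothetical 40-CU dGPU are assumptions, not confirmed specs:

```python
def peak_bw(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps_per_pin

# Assumption: Strix Halo-style APU on a 256-bit LPDDR5X-8000 bus.
apu = peak_bw(256, 8)     # 256.0 GB/s
# Assumption: hypothetical 40-CU dGPU on a 192-bit bus with 28 Gbps GDDR7.
dgpu = peak_bw(192, 28)   # 672.0 GB/s
print(f"dGPU has {dgpu / apu:.2f}x the raw memory bandwidth")
```

Under those assumptions the discrete card has over 2.5x the bandwidth, before even counting the higher TDP headroom, so the same CU count could land in a very different performance tier.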
I picked 40 because that's about the lowest CU count they'd need for a discrete graphics card to make sense. Maybe with a few less CUs they could hit higher clocks, and with V-Cache and on-board RAM they could match or beat Strix Halo.
Strix Halo has been confirmed, something would have to have gone very wrong if the 40 CU APU doesn't go into production. AMD has said they don't have any plans to make low-power discrete GPUs with this gen going forward.
I think it'll be 36/40 and 56/60, and I fully expect desktop APUs to go up to 40 CUs as well, if not higher, given Granite Ridge's reported 170W upper limit.
AMD's initial plan for N43/N44 is 64/32
That makes:
N43 a monolithic N32 (on an advanced node?)
N44 an N33 on an advanced node

I expect N43 to have 2 cards:
($500) 8700 XT = 4070 Ti
($400) 8700 = 4070 / 7800 XT

N33 could be:
($300-$330) 8600 XT = 4060 Ti / 7700 XT
Where did you hear that it's going to be 64 CUs for N43?

MLID sources.
Shortages of a key chip packaging technology are constraining the supply of some processors, Taiwan Semiconductor Manufacturing Co. Ltd. chair Mark Liu has revealed.
Liu made the remarks during a Wednesday interview with Nikkei Asia on the sidelines of SEMICON Taiwan, a chip industry event. The executive said that the supply shortage will likely take 18 months to resolve.
Historically, processors were implemented as a single piece of silicon. Today, many of the most advanced chips on the market comprise not one but multiple semiconductor dies that are manufactured separately and linked together later. One of the technologies most commonly used to link dies together is known as CoWoS.
https://siliconangle.com/2023/09/08/tsmc-says-chip-packaging-shortage-constraining-processor-supply/
TSMC reportedly intends to expand its CoWoS capacity from 8,000 wafers per month today to 11,000 wafers per month by the end of the year, and then to around 20,000 by the end of 2024.
TSMC currently has the capacity to process roughly 8,000 CoWoS wafers every month. Between them, Nvidia and AMD utilize about 70% to 80% of this capacity, making them the dominant users of this technology. Following them, Broadcom emerges as the third largest user, accounting for about 10% of the available CoWoS wafer processing capacity. The remaining capacity is distributed between 20 other fabless chip designers.
Nvidia uses CoWoS for its highly successful A100, A30, A800, H100, and H800 compute GPUs.
AMD's Instinct MI100, the Instinct MI200 series (MI210/MI250/MI250X), and the upcoming Instinct MI300 also use CoWoS.
https://www.tomshardware.com/news/amd-and-nvidia-gpus-consume-lions-share-of-tsmc-cowos-capacity
Taiwan Semiconductor Manufacturing Co. Chairman Mark Liu said the squeeze on AI chip supplies is "temporary" and could be alleviated by the end of 2024.
https://asia.nikkei.com/Business/Te...-AI-chip-output-constraints-lasting-1.5-years
Liu revealed that demand for CoWoS surged unexpectedly earlier this year, tripling year-over-year and leading to the current supply constraints. The company expects its CoWoS capacity to double by the end of 2024.
https://ca.investing.com/news/stock...-amid-cowos-capacity-constraints-93CH-3101943
Fuck no. I'll get a console at that point.

Okay son, we've heard that before.
It is becoming increasingly accepted in the mainstream that AMD will go missing at the high end in 2024 (& most of 2025 too)
Nvidia also might prioritize 3nm wafers to AI cards.
What does it mean for pricing (& launch dates) of top-end Blackwell cards?
https://www.techradar.com/pro/gpu-p...prioritize-ai-what-could-that-mean-for-gamers
When AMD launches its RDNA 4 family of GPUs, possibly next year, there won’t be an AMD Radeon RX 8800 or 8900, according to TechSpot. This will give its rival Nvidia a clear run at manufacturing the best GPUs to meet the high-end gaming market, but could also serve to constrain supply and spike prices.
If the 5090 still has 24GB of VRAM it would be rather disappointing. 48GB would be nice but would drive up the cost of the card. Already I can see some games using close to 20GB of VRAM, and in 2025 I don't think 24GB will cut it at higher resolutions. I don't really see NVIDIA using a 512-bit memory bus due to its high cost either; if they want a 32GB configuration a 256-bit memory bus would be suitable, but then again a 256-bit bus would most likely hold the card back at higher resolutions.
Allocating VRAM ≠ using VRAM
Most games allocate more VRAM than they actually need, and very few actually need anything over 16GB. We have been over this ad nauseam on these forums. How is this misinformation still floating around here?
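The distinction is the same one any pooling allocator creates: the engine reserves a big block up front, so monitoring overlays report the whole reservation even when only a fraction of it holds live data. A toy sketch of that behavior (pure Python illustration, not any real driver or engine API):

```python
class VramPool:
    """Toy caching allocator: reserves a large block up front,
    then hands out slices of it on demand."""

    def __init__(self, reserve_mb: int):
        self.reserved_mb = reserve_mb  # what monitoring overlays report
        self.used_mb = 0               # what the game actually touches

    def alloc(self, mb: int) -> None:
        if self.used_mb + mb > self.reserved_mb:
            raise MemoryError("pool exhausted")
        self.used_mb += mb

pool = VramPool(reserve_mb=20_000)  # overlay shows "20GB allocated"
pool.alloc(9_500)                   # textures/buffers actually in use
print(pool.reserved_mb, pool.used_mb)  # 20000 9500 -> allocated != used
```

A card with less VRAM than the reservation often runs the same settings fine, because the pool simply shrinks to fit; you only hit trouble when the genuinely-used portion exceeds physical VRAM.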
With the 4090 mobile being basically a desktop 4070 Ti with a bit more VRAM, and high-end mobile CPUs pushing some insane numbers, I'm more interested in laptops for the next gen.
Especially when something like the Bigscreen Beyond and/or Visor 4K are around the corner.
In 2025, a 5090 mobile will probably be a monster and compare to a desktop 4090, or better. CPUs as well, but honestly, the performance they deliver now would be enough.
A new Legion 9 with mini-LED, water cooling, a 5090 and a 15980HX (or whatever the top tier is in 2025) would cost an arm and a leg (about €5500 in the EU), but that thing would be an absolute monster.
AMD just released a laptop Ryzen CPU with V-Cache.