Vega 2 is only 20% faster. It's not coming to consumers at that price and performance - industry only. The AMD hype train is going to derail again with these "4080 Ti performance at 5 W for $55" type bullshit predictions. If they truly do have a hit with Navi, it certainly won't be priced at $250. Why would it be? Even Polaris was over MSRP for ages pre-mining.
Agreed - even if AMD produces a card whose performance betters, or even matches, Nvidia's best, they won't be selling it for peanuts, because that's business suicide. They might undercut Nvidia's pricing marginally if they deliver an equivalent product, but they aren't going to sell 2080 Ti-beating cards at a fraction of the price when they can sell them just as well at a higher price point.

And that's before we get to the issue of availability and retailer gouging.
 
Vega 2 is only 20% faster. It's not coming to consumers at that price and performance - industry only. The AMD hype train is going to derail again with these "4080 Ti performance at 5 W for $55" type bullshit predictions. If they truly do have a hit with Navi, it certainly won't be priced at $250. Why would it be? Even Polaris was over MSRP for ages pre-mining.
Because Navi supposedly is very small.
At $250, Navi would serve two purposes: making sure Nvidia's surplus stock can't sell, and making the current low-end RTX models worthless at their prices.
When competing with Nvidia on price, people will make the obvious choice.

The low price will hurt Nvidia more than AMD, and you could sell a lot of those $250 cards, because no one could claim better performance per watt or price/performance than an "RX Navi 3080".

This window only exists while Nvidia is still producing their 12nm cards; as soon as Nvidia starts producing their 7nm GPUs, it's over...
 
People aren’t buying AMD because they’re not releasing anything to compete with NVIDIA and haven’t for years. I’m sure many would jump ship if AMD had anything of comparable performance to offer, but they don’t. People aren’t going to buy underperforming cards for sympathy’s sake. That’s ridiculous and you can’t fault people for AMD’s lack of competitive GPUs. If anyone is digging AMD’s grave it’s AMD. They’re doing a LOT better with their Ryzen series CPUs, so I hope their GPUs catch up as well. Someone needs to knock the green giant on its ass. I have hope that this is their first step in the right direction, but I’m cautiously optimistic. AMD makes a lot of promises that often go undelivered.

Please read what I have posted again. My post is not so much about what AMD has offered in the past, but about what they could offer now. The video is about what is coming. I posted in reaction to you. You said that, basically, after AMD releases a competitive part, you will wait for nVidia (and Intel) to respond. So you clearly did not respond to what AMD offered in the past, but to what is to come. Now you dig into AMD's failures in the past, which I do not even contest. AMD was a weakling. Do you even know why they were? Please go and check Intel and nVidia's anti-competitive behaviour, especially Intel's. AdoredTV had great videos about it.

You reward these companies for their behaviour. You are just as guilty. The receiving parties are just as guilty as the stealing party.

But you will ignore this, most will probably ignore this, and maybe we'll be back in the same situation again. This is the dark side of capitalism - monopolies - even if it is still better than socialism. Even with a superior product, AMD is struggling to gain market share against Intel. The only hope AMD has is that Intel has made such a big mess that AMD might just survive. There is another hope: those who sow evil will eventually reap what they have sown.
 
Please read what I have posted again. My post is not so much about what AMD has offered in the past, but about what they could offer now. The video is about what is coming. I posted in reaction to you. You said that, basically, after AMD releases a competitive part, you will wait for nVidia (and Intel) to respond. So you clearly did not respond to what AMD offered in the past, but to what is to come. Now you dig into AMD's failures in the past, which I do not even contest. AMD was a weakling. Do you even know why they were? Please go and check Intel and nVidia's anti-competitive behaviour, especially Intel's. AdoredTV had great videos about it.

You reward these companies for their behaviour. You are just as guilty. The receiving parties are just as guilty as the stealing party.

But you will ignore this, most will probably ignore this, and maybe we'll be back in the same situation again. This is the dark side of capitalism - monopolies - even if it is still better than socialism. Even with a superior product, AMD is struggling to gain market share against Intel. The only hope AMD has is that Intel has made such a big mess that AMD might just survive. There is another hope: those who sow evil will eventually reap what they have sown.
I also don't get why some just go: "Well, AMD doesn't have a 2080 Ti equivalent, so they suck all the way, they're bullshit, so for my next computer I'm buying a 1030 DDR4!! It's Nvidia and they make the best ones, I'll never buy AMD!"... Yes, I get that it's the halo effect, but it doesn't mean everything else is crap. AMD has great products at the proper price/perf ratio - just not the magic halo.
 
I agree - for a straight-up Vega shrink with the same pipelines and whatnot, even 20% sounds optimistic. They have few choices with such a product: drop prices on the old gen to burn inventory, and sell the new 7nm improvement (15%, maybe 20% if we're optimistic) at similar prices to the old. Only if they upgraded the chip in other ways would they have a halo-type product... Don't know if I make any sense.
20% is the only figure we have from AMD itself regarding Vega 2 performance. So with HBM, only 20% more performance, and the cost to produce it (7nm is more expensive), it's not coming to consumers. It's not worth it outside of compute use.
Navi sure will, but I don't expect V64+15% for $250. Nvidia sells that currently for around $800 USD, yet the AMD hype train thinks AMD will sell it (without lame duck RT) for $250? GTFO

AMD hype has made many people forget about simple logic.
 
Because Navi supposedly is very small.
At $250, Navi would serve two purposes: making sure Nvidia's surplus stock can't sell, and making the current low-end RTX models worthless at their prices.
When competing with Nvidia on price, people will make the obvious choice.

The low price will hurt Nvidia more than AMD, and you could sell a lot of those $250 cards, because no one could claim better performance per watt or price/performance than an "RX Navi 3080".

This window only exists while Nvidia is still producing their 12nm cards; as soon as Nvidia starts producing their 7nm GPUs, it's over...

The small Navi, yes, will easily be around or under that. It's the bigger Navi that many are predicting to be the $250 wonder card. AMD will not price anything that lands in 2050/2060 territory at $250 out of the gate - Nvidia has already gouged that tier up to the $300-400 range.
But you're right that AMD could be doing that to really hurt the leftover Nvidia stock in the channel. Investors are already shitty about it, so this would just be a perfectly compounded storm. There will be some great fireworks this year in the GPU trenches!
 
There is no need for tracing add in cards. There is no such thing as a ray tracing core.

Better tell NVIDIA to revoke their Turing whitepaper then, as it states 2 RT cores per SM...
 
Not sure I get what you're saying... Yes, it's a GPU with tensor cores, but then you say you can have one without... so it CAN be separated? I think add-in cards make the most sense. The point of an add-on is backwards compatibility when you have an older card, things like that... or that you can add it later. The way I see it, Nvidia's architecture is superior in power consumption, hence efficiency, but only to a degree... They have also gone the HUGE chips route, with correspondingly HUGE prices. I don't see why AMD couldn't go "more huge" with Vega 2 on 7nm just to push what would pretty much be a talking piece, like Nvidia's 2000- or 3000-buck whatever (it's a very low-volume piece, probably written off as marketing). Now, I don't think they will do it, but as an engineering challenge, I guess they could.
If Navi is a small chip that can work in a chiplet configuration, AMD will then have a clearly superior architecture, and I think they should exploit it to the fullest. That would also mean ray tracing add-in cards, non-ray-tracing GPUs (obviously), and surely a configuration with both chiplets (GPU and RT chiplets). Yields and all that will probably be better all around, as opposed to the HUGE monolithic configuration Nvidia is working with right now, which limits their pricing strategy as well as the options they can potentially offer.

Well, add-in cards make zero sense from a pro-card AI standpoint. Big AI servers aren't going to add yet another card drawing yet more power and adding yet more cost. So add-ins would have one market... and would completely destroy a lucrative AI research mid-market.

What we all need to realize is that no GPU manufacturer is ever going to design cards for just gaming again. Tensor cores were not added by NV to the 2000 series for gaming; the Turing silicon simply already has them. Volta was the in-between, and they didn't even bother making a gaming part out of it. Turing has replaced Volta everywhere except their ARM SoC... and I'm sure at some point NV will change that line to ARM+Turing as well.

AMD is rumored to potentially have two different cards in the works: Vega 2 and Navi. It's possible, as some suggest, that Vega 2 is just some type of compute-only chip. That doesn't make much sense to me... why trademark a Vega 2 logo for compute cards? But who knows, could be.

Navi is going to be their longer-term new arch... so I would think there is no doubt it will have tensor cores. AMD's pro cards are going to need tensors for non-gaming markets. It's also possible that the console makers are busy working on the software side of things to bring tensor use to games in a real way in the late-'19/early-'20 time frame, making the early NV 2080 tensor cores look even more underpowered.

As for pricing, yeah, we'll have to see. Pricing is exactly why add-in cards won't work. Who is going to buy a tensor add-in card when there is exactly one shipping game that can use it? The only people who would buy it would be AI researchers... so if it were priced to sell to gamers, it would completely destroy the potential high-end tensor market. Granted, AMD isn't in that game, and perhaps as an F U to NV that would be mildly funny... but AMD shareholders would be rightly pissed that AMD basically destroyed a lucrative market instead of trying to compete in it. Like it or not for gamers, compute cards with tensor bits sell very expensive server cards; add-ins make zero sense from that standpoint, and they wouldn't sell to gamers anyway. As it is, NV already has the issue of people saying no to 2000-series cards based on pricing.
 
Well, add-in cards make zero sense from a pro-card AI standpoint. Big AI servers aren't going to add yet another card drawing yet more power and adding yet more cost. So add-ins would have one market... and would completely destroy a lucrative AI research mid-market.

What we all need to realize is that no GPU manufacturer is ever going to design cards for just gaming again. Tensor cores were not added by NV to the 2000 series for gaming; the Turing silicon simply already has them. Volta was the in-between, and they didn't even bother making a gaming part out of it. Turing has replaced Volta everywhere except their ARM SoC... and I'm sure at some point NV will change that line to ARM+Turing as well.

AMD is rumored to potentially have two different cards in the works: Vega 2 and Navi. It's possible, as some suggest, that Vega 2 is just some type of compute-only chip. That doesn't make much sense to me... why trademark a Vega 2 logo for compute cards? But who knows, could be.

Navi is going to be their longer-term new arch... so I would think there is no doubt it will have tensor cores. AMD's pro cards are going to need tensors for non-gaming markets. It's also possible that the console makers are busy working on the software side of things to bring tensor use to games in a real way in the late-'19/early-'20 time frame, making the early 2000-series tensor cores look even more underpowered.

As for pricing, yeah, we'll have to see. Pricing is exactly why add-in cards won't work. Who is going to buy a tensor add-in card when there is exactly one shipping game that can use it? The only people who would buy it would be AI researchers... so if it were priced to sell to gamers, it would completely destroy the potential high-end tensor market. Granted, AMD isn't in that game, and perhaps as an F U to NV that would be mildly funny... but AMD shareholders would be rightly pissed that AMD basically destroyed a lucrative market instead of trying to compete in it. Like it or not for gamers, compute cards with tensor bits sell very expensive server cards; add-ins make zero sense from that standpoint, and they wouldn't sell to gamers anyway. As it is, NV already has the issue of people saying no to 2000-series cards based on pricing.
Oh yeah, I agree 100% on the pro stuff... However, if this new Navi/chiplet setup is modular enough, it might make sense to make RT-only add-in cards for gaming. Assuming there are no overly complex issues with software/drivers, it's a no-brainer to me, and I hope they do it... So I guess it comes down to how modular the design will be.
Actually, if it's modular enough, for the pro area they could have different setups - more AI cores or more GPU cores and so on, just different mixes... Could be very cool.
 
Oh yeah, I agree 100% on the pro stuff... However, if this new Navi/chiplet setup is modular enough, it might make sense to make RT-only add-in cards for gaming. Assuming there are no overly complex issues with software/drivers, it's a no-brainer to me, and I hope they do it... So I guess it comes down to how modular the design will be.
Actually, if it's modular enough, for the pro area they could have different setups - more AI cores or more GPU cores and so on, just different mixes... Could be very cool.

Well, it sounds like if they want more tensor capability... they just add another tensor chip to the package. Think of chiplets as easy-to-assemble SoC packages. Want more CPU cores... add another CPU chiplet. Zen 2 is supposed to remove a bunch of stuff from the CPU die, like the memory controllers. The 7nm part is actually much smaller and less complex, which is likely why the leaks all say they are seeing extremely good fab results - they have simplified the CPU design a lot. This is possible because they are making 14nm controller mini-chips for the Zen 2 package, which will house the memory controller and who knows what else right now.
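If it helps, here is a toy way to picture the "package as a bag of chiplets" idea: an I/O die holding the memory controllers, plus however many small compute chiplets you want to drop onto the package. Purely illustrative - the names and counts below are made up, not AMD's actual configuration.

Code:
#include <cstdio>
#include <vector>

enum class ChipletKind { IoDie, CpuCores, GpuCores, MatrixEngine };

struct Chiplet { ChipletKind kind; int units; };   // units = cores/CUs/etc.

struct Package {
    std::vector<Chiplet> chiplets;
    void add(ChipletKind k, int units) { chiplets.push_back({k, units}); }
    int total(ChipletKind k) const {
        int n = 0;
        for (const Chiplet& c : chiplets) if (c.kind == k) n += c.units;
        return n;
    }
};

int main() {
    Package apu;
    apu.add(ChipletKind::IoDie, 2);        // hypothetical: I/O die with two memory channels
    apu.add(ChipletKind::CpuCores, 8);     // one 8-core CPU chiplet
    apu.add(ChipletKind::CpuCores, 8);     // want more cores? add another chiplet
    apu.add(ChipletKind::GpuCores, 20);    // a small GPU chiplet
    std::printf("CPU cores: %d, GPU CUs: %d\n",
                apu.total(ChipletKind::CpuCores), apu.total(ChipletKind::GpuCores));
    return 0;
}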

That is why I was saying I'm sort of excited to see what they do with the future Zen+Navi APUs. It's possible this next batch of APUs is in fact just an update of the 2200G/2400G, still with Vega GPUs. But future APUs employing chiplet tech are much, much more exciting, for lots of reasons. One being that they could, as you suggest, just bolt on extra ASIC-type hardware such as tensor stuff. They could also build a second memory controller into the controller chip for more memory bandwidth (although that may require a new motherboard), and they could basically drop full-featured Navi chips onto the same package if the silicon fits... and at 7nm that isn't impossible. This is why the PS5 and next-gen Xbox are a bit exciting, IMO. If the rumors are true, AMD is basically talking about making Zen 2 a custom SoC-type system with a mini-Threadripper design: more CPU cores and room for full, non-stripped GPU cores. We'll see, I guess - CES isn't far away.
 
I mean, Vega 20 performance increase could be pretty much anywhere. A straight shrink with double the bandwidth and a bit higher clocks likely doesn't get you all that much. But also double the ROPs to go along with that and you probably get some substantial benefits at 4K, perhaps even beating Nvidia at stupidly high resolutions like triple head 5K. Fix the DSBR and other broken stuff from 14nm Vega, assuming those are even salvageable, and who knows how much that would help.

But obviously Vega 20 isn't trading blows with the 2080 Ti, or AMD wouldn't be so coy about releasing it to consumers. And if the rumour of "between 2080 and 2080 Ti" is true, then I expect that on average Vega 20 is going to end up closer to the former than the latter, for the same reason.
 
Please read what I have posted again. My post is not so much about what AMD has offered in the past, but about what they could offer now. The video is about what is coming. I posted in reaction to you. You said that, basically, after AMD releases a competitive part, you will wait for nVidia (and Intel) to respond. So you clearly did not respond to what AMD offered in the past, but to what is to come. Now you dig into AMD's failures in the past, which I do not even contest. AMD was a weakling. Do you even know why they were? Please go and check Intel and nVidia's anti-competitive behaviour, especially Intel's. AdoredTV had great videos about it.

You reward these companies for their behaviour. You are just as guilty. The receiving parties are just as guilty as the stealing party.

But you will ignore this, most will probably ignore this, and maybe we'll be back in the same situation again. This is the dark side of capitalism - monopolies - even if it is still better than socialism. Even with a superior product, AMD is struggling to gain market share against Intel. The only hope AMD has is that Intel has made such a big mess that AMD might just survive. There is another hope: those who sow evil will eventually reap what they have sown.
WTF are you talking about? You must be thinking you quoted someone else. I didn't say anything that you just said. I said if AMD releases something comparable, I'm going with AMD immediately. How you misconstrued that as anything else is beyond me. I'm not waiting on Intel and NVIDIA; I'm waiting on AMD. This year might be AMD's comeback year for GPUs. My current NVIDIA card is actually the first I've ever owned. I'm rooting for big red and always have. I switched to NVIDIA two years ago after the initial 10-series release because AMD made a lot of claims during that period but didn't deliver. My current rig is using a Ryzen CPU and an NVIDIA GPU. I'm definitely not the reason for AMD's failings. AMD is. And lol ... reap what we sow? That is ridiculous, but I want what you're smoking. Calling people evil because they're purchasing a clearly better-performing product, which is probably the most common thing in the world, is silly. Your idea of what's "evil" is seriously misguided.
 
ChadD
Do you have ANY data to support your claim that "RT cores are just tensor cores"? Yes/No?

I say (as stated in the whitepaper, and given that you can only get hold of the RT cores via CUDA) that they are dedicated fixed-function hardware units, not tensor cores - the tensor cores are used for denoising the RT image, not for rendering the RTRT.
I have seen you post the claim several times, but you have never given any documentation in support.
 
Hoping the rumors of 1070 performance and a 150 W TDP at $250 are true. It'll be an instant buy for me.
Just by shrinking a Vega 56 to 7nm and changing the memory to GDDR6, you could have a winner over the 1070 at 150-160 W. So, if the Navi arch is designed mostly for gaming instead of Vega's all-round approach, we could easily see 1080-level performance from a 160 W Navi GPU.
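A rough back-of-envelope for that shrink argument, assuming Vega 56's ~210 W board power and the roughly "half the power at the same performance" figure AMD/TSMC have quoted for the 14nm-to-7nm jump - both are assumptions for illustration, not measured numbers:

Code:
#include <cstdio>

int main() {
    // Assumed inputs (illustrative only): Vega 56 board power, and a node
    // transition that halves power at iso-performance per vendor claims.
    const double vega56_power_w = 210.0;
    const double iso_perf_power_scale = 0.5;

    const double shrunk_power_w = vega56_power_w * iso_perf_power_scale;
    std::printf("Same-performance 7nm part: ~%.0f W\n", shrunk_power_w);  // ~105 W
    // Clocking such a part back up into a 150-160 W budget is where the
    // "beats a 1070 at 150-160 W" expectation comes from.
    return 0;
}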

And Vega 2 could reach 2080 performance, but with close to 300 W of consumption - profitable for AMD only if sold as a new FE.
 
ChadD
Do you have ANY data to support your claim that "RT cores are just tensor cores"? Yes/No?

I say (as stated in the whitepaper, and given that you can only get hold of the RT cores via CUDA) that they are dedicated fixed-function hardware units, not tensor cores - the tensor cores are used for denoising the RT image, not for rendering the RTRT.
I have seen you post the claim several times, but you have never given any documentation in support.

You mean NVIDIA saying "yes, tensor cores do ray tracing"? Gotcha. No - why would they break message?

I'm simply not an idiot. I understand how ray tracing works. I understand what tensor flow is.

I also know how Nvidia has marketed Turing to AI folks.

TensorFlow is a Google API. It basically allows people to perform matrix calculations in hardware.

Nvidia has improved on Google's design for Turing, allowing each matrix block to be split into smaller math units. Basically this is what is happening when they talk about tensor matrix math at INT16, INT8 and INT4 precision: the unit is being split up into multiple smaller matrix containers, each holding less data. There are a lot of AI uses for being able to perform 2-4x more math in hardware at reduced precision. It also happens to be exactly what you would want for calculating rays. You don't need 64-bit precision to define the rays calculated for a light source; instead you need a lot of low-precision ones. A simple system of "ray travels, ray intersects/bounces or it doesn't" can be defined with very little address space.
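To make the low-precision point concrete, here is a toy CPU-side sketch of the kind of operation being described: a small INT8 multiply-accumulate into 32-bit accumulators, so the same accumulator width holds more, narrower operands. This is just an illustration of reduced-precision matrix math, not NVIDIA's actual tensor core hardware or API; the names are made up.

Code:
#include <cstdint>
#include <cstdio>

// Multiply-accumulate a 4x4 block of int8 operands into int32 accumulators -
// the shape of operation a matrix engine performs in one step. Halving operand
// width (int16 -> int8 -> int4) is what lets the same unit do 2-4x the math.
void mma_4x4_int8(const int8_t A[4][4], const int8_t B[4][4], int32_t C[4][4]) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += int32_t(A[i][k]) * int32_t(B[k][j]);
}

int main() {
    const int8_t A[4][4] = {{1,2,3,4},{5,6,7,8},{1,0,1,0},{0,1,0,1}};
    const int8_t B[4][4] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};  // identity
    int32_t C[4][4] = {};
    mma_4x4_int8(A, B, C);
    std::printf("C[0][0]=%d C[1][1]=%d\n", C[0][0], C[1][1]);  // prints 1 and 6
    return 0;
}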

https://developer.nvidia.com/rtx/raytracing/dxr/DX12-Raytracing-tutorial-Part-1
Scroll down to part 8, "Acceleration Structure":
"To be efficient, raytracing requires putting the geometry in an acceleration structure (AS) that will reduce the number of ray-triangle intersection tests during rendering. In DXR this structure is divided into a two-level tree. Intuitively, this can directly map to the notion of an object in a scene graph, where the internal nodes of the graph have been collapsed into a single transform matrix for each bottom-level AS objects. Those BLAS then hold the actual vertex data of each object. However, it is also possible to combine multiple objects within a single bottom-level AS: for that, a single BLAS can be built from multiple vertex buffers, each with its own transform matrix."

Go through and read the developer info for DX ray tracing and it will be very clear to you. Remember that DX is a high-level API; it isn't describing what the Nvidia driver is doing at a hardware level. Having said that, seeing how the API asks developers to define their math representations, it's clear they are using matrix math.
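For anyone who hasn't read the DXR docs, here is a rough conceptual sketch of that two-level idea in plain C++: a top level of instances, each with a transform and a pointer to a bottom-level object that owns the triangles, where a cheap box test decides which objects need per-triangle tests at all. Real implementations use BVHs and (on Turing) hardware traversal; the type names are made up for illustration, and instance transforms are ignored to keep it short.

Code:
#include <algorithm>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };
struct Mat4 { float m[16]; };                     // per-instance transform

struct BottomLevelAS {                            // "BLAS": geometry of one object
    std::vector<Triangle> triangles;
    Vec3 aabb_min, aabb_max;                      // bounding box around the object
};

struct Instance {                                 // entry in the top level ("TLAS")
    Mat4 transform;
    const BottomLevelAS* blas;
};

struct TopLevelAS { std::vector<Instance> instances; };

// Slab test: does the ray (origin o, direction d) hit the box [lo, hi]?
bool ray_hits_aabb(Vec3 o, Vec3 d, Vec3 lo, Vec3 hi) {
    float tmin = 0.0f, tmax = 1e30f;
    const float ro[3] = {o.x, o.y, o.z}, rd[3] = {d.x, d.y, d.z};
    const float bl[3] = {lo.x, lo.y, lo.z}, bh[3] = {hi.x, hi.y, hi.z};
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / rd[a];
        float t0 = (bl[a] - ro[a]) * inv, t1 = (bh[a] - ro[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Top-level traversal idea: one cheap box test per instance culls whole objects,
// so the expensive per-triangle tests only run for instances the ray enters.
int instances_needing_triangle_tests(const TopLevelAS& tlas, Vec3 o, Vec3 d) {
    int n = 0;
    for (const Instance& inst : tlas.instances)
        if (ray_hits_aabb(o, d, inst.blas->aabb_min, inst.blas->aabb_max)) ++n;
    return n;
}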

If you're not familiar with exactly what a "tensor" is, the easiest way to describe it might be this: a tensor is a mathematical entity that lives in a structure and interacts with other mathematical entities. If one transforms the other entities in the structure in a regular way, then the tensor must obey a related transformation rule.

https://en.wikipedia.org/wiki/Tensor

NV's "RT cores" are just marketing speak. Keep in mind I'm also not saying they are junk - tensor cores will be fairly standard going forward. NV was the first company to ship them to researchers and gamers both; kudos to them on that. It's a Google idea, but they are the guys delivering hardware today. When it comes to Turing, the "RT" core count is an exact multiple of the actual tensor core count: whatever they list as the RT core count, multiply by 8 and you have the tensor core count.

As for an "NV admits it" type post, the closest you will get is Nvidia's OptiX:
https://developer.nvidia.com/optix
It is a tool that predates Turing... and leverages Volta to perform RTRT.

Key features:
  • AI-accelerated rendering
 