NVIDIA Ampere GPUs to feature 2x the performance of the RTX 20 series, PCIe 4.0, up to 826 mm² dies and 20 GB of VRAM

I never said AMD never benefited from the mining craze. Also, from what I remember, miners were buying up AMD cards like crazy, with Nvidia playing second fiddle. So much so that ASRock started making AMD GPUs specifically for mining. There was a reason Nvidia GPUs were cheaper than AMD's during the mining craze. Why do you think RX 480s and 580s are everywhere on eBay? They were used for mining, and now the mining market is dead.

Citation needed.

The way I see it, since AMD's GPU market share is pathetically low then selling hardware cheap to Sony and Microsoft is better than not selling anything at all. There are plenty of people who would only buy an Nvidia GPU but have no problem with a Playstation or Xbox console. In that case AMD banks.


I don't know what AMD's game plan is, and I don't think even AMD knows. Lisa Su wants AMD to stop looking like the lower-priced alternative, but they have a long way to go before they can ask for prices equal to Nvidia's. Right now the GPUs AMD offers are faster for a lower price, but with ray tracing missing. The problem is they aren't fast enough for the modest discount to make up for the missing ray tracing.
Developing purpose-built, customized hardware like nVidia's Tensor cores is expensive and time-consuming. AMD doesn't currently have the cash flow to undertake that degree of research. Their loans are a noose, and they need them gone before they can really start that undertaking. They need them gone fast, because if they take too long and fall too far behind, their cash flow will dwindle and they won't be able to afford the research. That's another reason why getting the consoles was such a big deal for AMD: they have secured the standard for games for the next 5 years, so even if they don't have RT, they have ensured that any PC game that also gets a console launch treats it as an added feature, not a core one. That gives them time to catch up and control the flow to some degree.
 
Developing purpose-built, customized hardware like nVidia's Tensor cores is expensive and time-consuming. AMD doesn't currently have the cash flow to undertake that degree of research. Their loans are a noose, and they need them gone before they can really start that undertaking. They need them gone fast, because if they take too long and fall too far behind, their cash flow will dwindle and they won't be able to afford the research. That's another reason why getting the consoles was such a big deal for AMD: they have secured the standard for games for the next 5 years, so even if they don't have RT, they have ensured that any PC game that also gets a console launch treats it as an added feature, not a core one. That gives them time to catch up and control the flow to some degree.

Where do you come up with this stuff? AMD's debt has gone from over 2 billion in 2016 to only 486 million in 2020, while Intel carries 25.8 billion in debt, so it seems like AMD is doing just fine on that front. Even Nvidia is carrying about 2 billion in debt. AMD has been sinking profits back into R&D, so I'm not sure why you think they can't afford it; Ryzen has brought in the income they needed.
 
Where do you come up with this stuff? AMD's debt has gone from over 2 billion in 2016 to only 486 million in 2020, while Intel carries 25.8 billion in debt, so it seems like AMD is doing just fine on that front. Even Nvidia is carrying about 2 billion in debt. AMD has been sinking profits back into R&D, so I'm not sure why you think they can't afford it; Ryzen has brought in the income they needed.
Most of that was paid off in the last year, which is great. But during their decade of cost reductions they shut down most of their R&D divisions to cut costs. And yes, AMD had 2B in debt, same as nVidia, but look at their financials: 2B in debt for AMD is much more, relative to income and profits, than it is for nVidia. Also look at how that debt was generated. nVidia's came from pumping money back into R&D so they could expand their markets; AMD's was taken on just so they wouldn't shut down, from lenders who were most interested in the potential profits they could make selling AMD off should it fail.

And yes, Ryzen has brought them the profits they needed. Ryzen is great, I am loving it and EPYC alike, and I am hoping to bring in some Threadrippers next fall. The success of Ryzen can single-handedly be said to have saved AMD; if it had been another Bulldozer, AMD would currently be on the chopping block, with bidding wars over their various IPs and Intel suing, or lining up lawsuits against, whoever bought up their x86 tech.

As for why Intel carries so much debt, it's because they have the cash flow to be able to. When AMD or nVidia need to generate cash to pay for things, they have to sell off shares, which dilutes their value, because they don't have the cash reserves to cover it. Intel, on the other hand, has the cash flow and cash reserves to take out long-term loans and shore up their own finances instead. They have some 13B cash in hand at the moment, so their 29B debt is effectively only 16B or so, and they have a far more flexible position than either AMD or nVidia.
 
Unlikely to be repeated: the reason the 970 was so cheap is that it was built on a mature 28nm process.

Wrong! The RTX 2070 has a LARGER die size than the GTX 980, and both uncut and cut parts sold for the exact same price.

Turing was built on a mature 12nm process node. Ampere should be even cheaper to build, once they get yields up.

The only reason they bumped the product name down (at the same price) is that adding RTX, plus making it faster than Pascal, pushed die size into the stratosphere!

But be assured that Nvidia can continue to pull the same trick Maxwell did: Ampere dies will start small, then their successors will go big again.
 
They are pushing RT because that's where things need to go. The traditional graphics pipeline is old; discoveries and algorithms made in the last 5 years could be drastically better, and the hardware has caught up to those discoveries. By getting RT and many of these features in place and fleshed out now, nVidia will have mature tech ready when the pipeline is finally updated, not a first-gen effort. I suspect in another 2 years we will start seeing MCM consumer cards, and another year or 2 after that we'll see the pipeline updated to properly use it. Outside a "must have" launch title or 2, of course.
RT has always been the holy grail, ever since the early nineties when I first started screwing around with 3D Studio, and later 3ds Max, rendering a complex frame with glass and mirrors that would take days to complete - a single frame. And I'd dream of a future day when it would all be happening in full motion, in real time.

And here we finally are on the verge of it, and yet we've got simples calling RT a conspiracy - something Nvidia haphazardly cooked up - to "milk consumers". You just want to backhand them.
 
RT has always been the holy grail, since the early nineties when I first started screwing around with 3D Studio and later 3ds Max to render a complex frame with glass and mirror balls that would take days, and I'd dream of a day it would all happen in real time.

Here we are on the verge of real-time and you've got simples calling it a fucking conspiracy to MiLk CoNsuMeRs. You just want to slap them.
RT is great, but it is the least interesting part of what the Tensor cores can do in my opinion.
 
Developing purpose built customized hardware like nVidia’s Tensor cores is expensive and time consuming. AMD doesn’t have the cash flow to currently undertake that degree of research. Their loans are a noose and they need them gone before they can really start that undertaking. They need them gone fast because if they take too long and fall too far behind their cash flow will dwindle and they won’t be able to afford the research. That’s another reason why getting the consoles was such a big deal for AMD they have secured the standard for games for the next 5 years so even if they don’t have RT they have ensured that any PC game that is also getting a console launch only has it as an added feature not a core one. So it gives them time to catch up and control the flow to some degree.
From what I've heard, AMD and Microsoft were working together on Ray-Tracing and Nvidia jumped ahead when they heard what the two were working on. I don't remember where I heard this. You can tell that Nvidia threw their Ray-Tracing technology together, because the frame rate of games drops dramatically with the feature on. Despite Nvidia's cash on hand, their Ray-Tracing technology is awful, and it shows because only a handful of games utilize it. Despite Nvidia's efforts, sales of the RTX cards are weak compared to the previous generation of GPUs, probably due to higher prices. Nvidia has spent their money on other failed ventures, like self-driving technology with Tesla and their Tegra mobile chips, so it isn't like they spent all of it on GPU technology.
 
From what I've heard, AMD and Microsoft were working together on Ray-Tracing and Nvidia jumped ahead when they heard what the two were working on. I don't remember where I heard this. You can tell that Nvidia threw their Ray-Tracing technology together, because the frame rate of games drops dramatically with the feature on. Despite Nvidia's cash on hand, their Ray-Tracing technology is awful, and it shows because only a handful of games utilize it. Despite Nvidia's efforts, sales of the RTX cards are weak compared to the previous generation of GPUs, probably due to higher prices. Nvidia has spent their money on other failed ventures, like self-driving technology with Tesla and their Tegra mobile chips, so it isn't like they spent all of it on GPU technology.
Yeah, MS is working on bringing ray tracing to the DX12 spec, and it is currently available in the Insider Preview builds for Windows 10 developers. NVidia has instructions on their site for how to install the tools to get it going.
https://developer.nvidia.com/rtx/raytracing/dxr/DX12-Raytracing-tutorial-Part-1
From what I understand it still uses the RT cores, and the eventual AMD implementation will require their own version of RT cores as well. The math involved is specialized enough that the generic shader cores can only replicate it to some degree by faking or approximating some values, but they don't have the precision needed to do it properly.
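For anyone curious what the developer-facing side of that looks like, here is a minimal sketch (mine, not taken from the NVIDIA tutorial) that simply asks the D3D12 runtime whether a DXR tier is exposed; it assumes a Windows 10 1809+ SDK and linking against d3d12.lib.

```cpp
// Minimal DXR capability check: create a D3D12 device on the default adapter
// and query the raytracing feature tier via CheckFeatureSupport.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("DXR supported (tier %d).\n", static_cast<int>(options5.RaytracingTier));
    } else {
        std::printf("DXR not supported on this adapter/driver.\n");
    }
    return 0;
}
```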
 
Yeah, MS is working on bringing ray tracing to the DX12 spec, and it is currently available in the Insider Preview builds for Windows 10 developers.

Well, it's all released -- they aren't still working on it. Even the link you posted says you need Windows 10 1809 or later. It doesn't run well on regular GPU cores because it is very branchy code; nothing to do with precision. In fact, the regular GPU cores are involved in parts of the process -- it's just that there is specialized hardware for the BVH (bounding volume hierarchy) calculations. AMD's RT GPUs will likely work very similarly.
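To illustrate what "branchy" means here, below is a toy CPU-side sketch of BVH traversal (purely my own illustration of the idea, not how the fixed-function units actually implement it): every step takes a data-dependent path, which is exactly the pattern wide SIMT shader cores handle poorly and dedicated traversal hardware handles well.

```cpp
// Toy bounding-volume-hierarchy traversal showing the data-dependent branching.
#include <algorithm>
#include <stack>
#include <utility>
#include <vector>

struct AABB { float mn[3], mx[3]; };
struct Node {                         // flattened BVH node
    AABB box;
    int  left = -1, right = -1;       // child indices; -1 marks a leaf
    int  firstTri = 0, triCount = 0;  // triangle range for leaves
};
struct Ray { float origin[3], invDir[3]; float tMax = 1e30f; };

// Standard slab test for ray vs. axis-aligned box.
static bool hitBox(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = r.tMax;
    for (int i = 0; i < 3; ++i) {
        float t0 = (b.mn[i] - r.origin[i]) * r.invDir[i];
        float t1 = (b.mx[i] - r.origin[i]) * r.invDir[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;            // divergent early-out
    }
    return true;
}

// Placeholder leaf test; a real tracer would run ray-triangle intersections here.
static bool hitTriangles(const Ray&, int /*first*/, int count) { return count > 0; }

// The loop length, the pushes, and the early exit are all data-dependent, so
// neighbouring rays in a SIMT warp constantly want to take different paths.
bool traverse(const std::vector<Node>& nodes, const Ray& ray) {
    std::stack<int> todo;
    todo.push(0);                                  // start at the root
    while (!todo.empty()) {
        const Node& n = nodes[todo.top()]; todo.pop();
        if (!hitBox(ray, n.box)) continue;         // prune this subtree
        if (n.left < 0) {                          // leaf: test its triangles
            if (hitTriangles(ray, n.firstTri, n.triCount)) return true;
        } else {                                   // interior: descend into children
            todo.push(n.left);
            todo.push(n.right);
        }
    }
    return false;
}
```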
 
Well, it's all released -- they aren't still working on it. Even the link you posted says you need Windows 10 1809 or later. It doesn't run well on regular GPU cores because it is very branchy code; nothing to do with precision. In fact, the regular GPU cores are involved in parts of the process -- it's just that there is specialized hardware for the BVH (bounding volume hierarchy) calculations. AMD's RT GPUs will likely work very similarly.
It works on current AMD hardware using the DXR fallback layer, which lets AMD cards emulate ray tracing. However, AMD has not updated their drivers to use the fallback layer, because rumours say they were getting single-digit FPS when doing so.
https://www.dsogaming.com/news/amd-...ay-tracing-via-microsofts-dxr-fallback-layer/

Edit.
Apparently AMD did enable it for a short time, then disabled it in their drivers shortly after. I managed to find one video from last year benchmarking it.
 
From what I've heard, AMD and Microsoft were working together on Ray-Tracing and Nvidia jumped ahead when they heard what the two were working on. I don't remember where I heard this. You can tell that Nvidia threw their Ray-Tracing technology together, because the frame rate of games drops dramatically with the feature on. Despite Nvidia's cash on hand, their Ray-Tracing technology is awful, and it shows because only a handful of games utilize it. Despite Nvidia's efforts, sales of the RTX cards are weak compared to the previous generation of GPUs, probably due to higher prices. Nvidia has spent their money on other failed ventures, like self-driving technology with Tesla and their Tegra mobile chips, so it isn't like they spent all of it on GPU technology.
RTX cards are selling very well, and Nvidia has a nice chunk of the self-driving market; Tegra lives there and in AI.

So I wouldn't call it a failure.

They did fail miserably in the cell phone market, though.
 
Hopefully it's also 2x RT performance. That's where it's needed the most.
I'm hoping for well more than 2x RT performance. That's needed for it to go 'mainstream' from a developer perspective -- for RTX2070-levels of RT in a hypothetical RTX3050Ti, for example.
 
Despite Nvidia's cash on hand, their Ray-Tracing technology is awful, and it shows because only a handful of games utilize it.

Keep in mind that, just as we have seen in the CPU and GPU markets, we're seeing some consolidation in game engines as well. Whereas every new game used to use a new engine, today the majority of games are powered by only 3-4 engines. This makes for slower adoption of all new features (and hits both GPU makers similarly), because game developers now have to wait for the engine developers to offer support before they can go about implementing it. At the same time, the development cycles for both the games and the engines keep extending further and further.
 
Actually tensor cores can be used for raytracing.

Not directly.

Tensor cores are used to boost framerate/resolution in the latest version of DLSS (implemented only in Youngblood & Deliver Us The Moon so far).

RT impacts framerate & resolution, so DLSS helps make RTX games playable, especially on low-end hardware such as the RTX 2060.

So Tensor cores help make ray-traced games playable, but it is the RT cores (maybe a modified version of shader cores + BVH branching-logic hardware) that do the actual ray tracing.

The above requires DirectX 12 on Windows.

Intel has a version that works in DirectX 11 without RT or Tensor cores:

extra CPU cores do the branching logic, then regular GPU shaders do the RT work. This is implemented in World of Tanks, a DirectX 11 game.
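As a rough back-of-the-envelope illustration of why that upscaling helps (my numbers, not official DLSS internals): rendering at a lower internal resolution and letting the Tensor cores reconstruct the final image means far fewer pixels get shaded per frame.

```cpp
// Illustrative only: pixels shaded at native 4K vs. an assumed 1440p internal
// resolution that is upscaled to 4K afterwards.
#include <cstdio>

int main() {
    const long long native4k   = 3840LL * 2160;   // 8,294,400 pixels
    const long long internal2k = 2560LL * 1440;   // 3,686,400 pixels (assumed internal render)
    std::printf("Native 4K shades %.2fx as many pixels as the 1440p internal render.\n",
                static_cast<double>(native4k) / static_cast<double>(internal2k));  // ~2.25x
    return 0;
}
```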
 
I'm hoping for well more than 2x RT performance. That's needed for it to go 'mainstream' from a developer perspective -- for RTX2070-levels of RT in a hypothetical RTX3050Ti, for example.

This is my thought, not based on any technical/rational analysis, but purely on amateur psychoanalysis:

Jensen has such a large ego that he can't allow a console to have a better GPU.

So by next March, expect an RTX 3060 with RTX 2080 Super levels of performance for $350,

which implies an RTX 3050 Ti, with RTX 2070 Super levels of performance, by next July for approximately $280.
 
Not directly.

Tensor cores are used to boost framerate/resolution in the latest version of DLSS (implemented only in Youngblood & Deliver Us The Moon so far).

RT impacts framerate & resolution, so DLSS helps make RTX games playable, especially on low-end hardware such as the RTX 2060.

So Tensor cores help make ray-traced games playable, but it is the RT cores (maybe a modified version of shader cores + BVH branching-logic hardware) that do the actual ray tracing.

The above requires DirectX 12 on Windows.

Intel has a version that works in DirectX 11 without RT or Tensor cores:

extra CPU cores do the branching logic, then regular GPU shaders do the RT work. This is implemented in World of Tanks, a DirectX 11 game.

Again, tensor cores CAN be used for raytracing. That's how it all started with DXR and Volta. Remember the Star Wars reflections demo? It was done on Volta initially.
Shaders can also be used, for that matter. Both are simply much less efficient.

On RTX cards, Tensor cores are used for denoising and DLSS, as you mentioned, among other things.

To make it clear: Pascal can do RT with shaders, Volta with tensor cores, and Turing with RT cores.
 
Think a little harder here. Why did the mining craze benefit Nvidia but not AMD? Because AMD's cards are too slow. Simple as that. Nvidia is charging more for an inherently superior product. The other bit which I've mentioned a thousand times is that your baseline company, AMD, sells every single GPU for a loss. If not for their CPU business being able to subsidize their GPU losses, AMD would either be out of business entirely or would have raised their prices.

What are you talking about? AMD blew Nvidia out of the water for mining. They sold every GPU they could produce. That's why, during the mining craze, if you COULD get an AMD card it cost twice as much, and THAT'S part of the reason why Nvidia has a bigger share of the gaming market.
 
Think a little harder here. Why did the mining craze benefit Nvidia but not AMD? Because AMD's cards are too slow.

Just for the record, this is total bullshit. Everyone benefited during the mining era. AMD destroyed everything Nvidia had when it came to mining, so they were picked up first, and when none were left, THEN miners started picking up 1070s and 1080s.
 
This is my thought, not based on any technical/rational analysis, but purely on amateur psychoanalysis:

Jensen has such a large ego that he can't allow a console to have a better GPU.

So by next March, expect an RTX 3060 with RTX 2080 Super levels of performance for $350,

which implies an RTX 3050 Ti, with RTX 2070 Super levels of performance, by next July for approximately $280.
You can always dream. Performance is no prob, but the price...

AMD can't have their GPU fall behind consoles either, so the best I would expect is to go head to head with midrange cards.
 
Just for the record, this is total bullshit. Everyone benefited during the mining era. AMD destroyed everything Nvidia had when it came to mining, so they were picked up first, and when none were left, THEN miners started picking up 1070s and 1080s.
Nvidia ended up being much more profitable than AMD once GPU miners were optimized for it, but AMD had a HUGE head start and earned itself a reputation as a mining card. So even when Nvidia ultimately became better, many people would still get AMD.

BTW, Nvidia still sold more cards than AMD, go figure.
 
Nvidia ended up being much more profitable than AMD once GPU miners were optimized for it, but AMD had a HUGE head start and earned itself a reputation as a mining card. So even when Nvidia ultimately became better, many people would still get AMD.

BTW, Nvidia still sold more cards than AMD, go figure.

You're out to lunch.

No one made miners for Nvidia GPUs because they couldn't mine well enough to be profitable.

Then when Bitcoin, Ethereum, Litecoin, Monero, etc. all started to moon, people were looking to mine on anything. The more cards, the better. Even if Nvidia cards were not nearly as efficient as AMD cards, you could still make money with them because the markets were booming.

Nvidia always sells more cards than AMD, mining or not. They're much bigger.
 
Yeah, MS is working on bringing ray tracing to the DX12 spec, and it is currently available in the Insider Preview builds for Windows 10 developers. NVidia has instructions on their site for how to install the tools to get it going.
https://developer.nvidia.com/rtx/raytracing/dxr/DX12-Raytracing-tutorial-Part-1
From what I understand it still uses the RT cores, and the eventual AMD implementation will require their own version of RT cores as well. The math involved is specialized enough that the generic shader cores can only replicate it to some degree by faking or approximating some values, but they don't have the precision needed to do it properly.
DXR was added to DirectX in Windows 10 1809, released in October 2018. Microsoft first revealed DXR publicly in March 2018.

https://devblogs.microsoft.com/directx/announcing-microsoft-directx-raytracing/
 
You're out to lunch.

No one made miners for Nvidia GPUs because they couldn't mine well enough to be profitable.

Then when Bitcoin, Ethereum, Litecoin, Monero, etc. all started to moon, people were looking to mine on anything. The more cards, the better. Even if Nvidia cards were not nearly as efficient as AMD cards, you could still make money with them because the markets were booming.

Nvidia always sells more cards than AMD, mining or not. They're much bigger.
IIRC my GTX 1070 did like 5 MH/s when I first tried mining Ethereum; later I used Cudaminer and it jumped to 25, and it ended up doing 32 MH/s.
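To put some structure on the economics being argued above, here is a simple profit-per-day sketch; the 32 MH/s figure comes from the post above, while every price and rate is a made-up placeholder, since the real numbers swung wildly during the boom. The point is just that when the revenue per MH/s spikes, even a less efficient card clears its power bill.

```cpp
// Rough mining-profitability sketch with placeholder rates.
#include <cstdio>

int main() {
    const double hashRateMH         = 32.0;   // MH/s, GTX 1070 figure from the post above
    const double revenuePerMHPerDay = 0.10;   // USD per MH/s per day -- placeholder
    const double cardPowerWatts     = 150.0;  // board power while mining -- placeholder
    const double electricityPerKWh  = 0.12;   // USD per kWh -- placeholder

    const double revenue = hashRateMH * revenuePerMHPerDay;
    const double power   = cardPowerWatts / 1000.0 * 24.0 * electricityPerKWh;
    std::printf("Revenue/day: $%.2f, power/day: $%.2f, profit/day: $%.2f\n",
                revenue, power, revenue - power);
    return 0;
}
```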
 
Interesting. When the dust settles, I expect there will be a lot of us with PS5s and Ampere under the tree this year. I plan on spoiling myself (Ampere) and the kids (PS5).
 