Funny how AMD is all in the news when the stock dropped more than 40% in two weeks.

I feel sorry for whoever owned this stock.

Volume combined with good supply and demand zones means lots of opportunity; you just need a solid rule set to make the trades.
 
Gimmick marketing.

Nvidia has had "RTX" tech since their Volta chips. They can call their tensor units "RTX ray tracing McGimmick" all they like; it doesn't change anything. It's just 16-bit tensor hardware. Nvidia wanted to find a use for it since it is baked into Volta and beyond, and finding a consumer use for it is logical. (They are never designing a real ground-up game chip again, none of them: not NV, AMD, or Intel. The money is in the server market.) No one wants to leave 20-30% of a die doing nothing when they use those same chips in consumer parts, or worse, fuse it off so people can't use consumer cards in place of their big, expensive, tensor-enabled server parts. So you have to hand it to NV marketing: they got out there and spun a great story. (I have ZERO doubt the game developers that have been talking about RTX... have also had their games running on beta AMD hardware.)

From AMD's MI60/50 release:
"ROCm has been updated to support the TensorFlow framework API v1.11.....
These low-level instructions implement compute operations all the way from single bit precision to 64-bit floating point. The most beneficial instruction for the acceleration of deep learning training is a float 16 dot product which accumulates into a 32-bit result, maintaining the accuracy of the operation."
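
To illustrate what that last line means (this is just a NumPy sketch of the idea, not AMD's actual instruction): do the multiplies in fp16, but keep the running sum in fp32, and the accuracy holds up.

```python
# Toy illustration of fp16 multiplies with different accumulator widths.
# Nothing vendor-specific here -- just plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000).astype(np.float16)
b = rng.standard_normal(10_000).astype(np.float16)

# "True" answer, computed in float64 for reference.
ref = float(np.dot(a.astype(np.float64), b.astype(np.float64)))

# fp16 products, running sum also kept in fp16: rounding error piles up.
acc16 = np.float16(0.0)
for x, y in zip(a, b):
    acc16 = np.float16(acc16 + x * y)

# fp16 products, running sum kept in fp32: the pattern the release notes describe.
acc32 = np.float32(0.0)
for x, y in zip(a, b):
    acc32 = acc32 + np.float32(x * y)

print(f"fp16 accumulator error: {abs(float(acc16) - ref):.4f}")
print(f"fp32 accumulator error: {abs(float(acc32) - ref):.4f}")
```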

AMD can call their machine learning bits (the TensorFlow-accelerating parts) "super compute units" or whatever they want. Bottom line, it's the same type of hardware you find on Nvidia's Volta-and-beyond server parts as well. The thing is, TensorFlow is a Google thing... they open-sourced it and it became a major standard for machine learning. Both NV and AMD have built hardware to accelerate it. (Don't let marketing fool you... NV isn't doing anything new or unique; they didn't invent TensorFlow. They are simply building to Google's API like everyone else is or will be. Intel is on the way as well, and no doubt their cards will also be aimed at the server market and the TensorFlow API.)
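
Rough sketch of what "building to the API" means in practice, assuming a TensorFlow 1.x install (e.g. tensorflow-gpu on CUDA or tensorflow-rocm on AMD): the user-facing code is identical either way.

```python
# Sketch only: the same TF 1.x graph runs unchanged whether the wheel
# underneath is the CUDA build or AMD's ROCm build; the framework API is
# the common layer and the vendor libraries do the acceleration.
import numpy as np
import tensorflow as tf  # assumes a 1.x build, e.g. tensorflow-gpu or tensorflow-rocm

a = tf.placeholder(tf.float16, shape=(None, 1024), name="a")
b = tf.constant(np.random.randn(1024, 1024).astype(np.float16), name="b")
c = tf.matmul(a, b)  # fp16 matmul, handed off to whatever backend is installed

with tf.Session() as sess:
    out = sess.run(c, feed_dict={a: np.random.randn(8, 1024).astype(np.float16)})
    print(out.dtype, out.shape)  # float16 (8, 1024)
```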

When AMD releases their next consumer card, don't worry: all the partial real-time ray tracing developers are talking about will work just fine... and most likely via Vulkan for either vendor.

It doesn't matter. If they can't stick ray tracing on the box as a selling point, it won't sell versus a 2080. They need either more speed for the same money or to cost less for the same speed. Remember, AMD is the underdog. As much as I like AMD over NVIDIA, they have NVIDIA's performance and feature halo to punch through.
 
Funny how AMD is all in the news when the stock dropped more than 40% in two weeks.

I feel sorry for whoever owned this stock.

Everybody's stock dropped: Intel's, NVIDIA's, etc. That said, I heard rumors of a Zen 2 launch at the end of October over two months ago, when stocks were flying.
 
Gimmick marketing.

Nvidia has had "RTX" tech since their Volta chips. They can call their tensor units "RTX ray tracing McGimmick" all they like; it doesn't change anything. It's just 16-bit tensor hardware. Nvidia wanted to find a use for it since it is baked into Volta and beyond, and finding a consumer use for it is logical. (They are never designing a real ground-up game chip again, none of them: not NV, AMD, or Intel. The money is in the server market.) No one wants to leave 20-30% of a die doing nothing when they use those same chips in consumer parts, or worse, fuse it off so people can't use consumer cards in place of their big, expensive, tensor-enabled server parts. So you have to hand it to NV marketing: they got out there and spun a great story. (I have ZERO doubt the game developers that have been talking about RTX... have also had their games running on beta AMD hardware.)

From AMD's MI60/50 release:
"ROCm has been updated to support the TensorFlow framework API v1.11.....
These low-level instructions implement compute operations all the way from single bit precision to 64-bit floating point. The most beneficial instruction for the acceleration of deep learning training is a float 16 dot product which accumulates into a 32-bit result, maintaining the accuracy of the operation."

AMD can call their machine learning bits (the TensorFlow-accelerating parts) "super compute units" or whatever they want. Bottom line, it's the same type of hardware you find on Nvidia's Volta-and-beyond server parts as well. The thing is, TensorFlow is a Google thing... they open-sourced it and it became a major standard for machine learning. Both NV and AMD have built hardware to accelerate it. (Don't let marketing fool you... NV isn't doing anything new or unique; they didn't invent TensorFlow. They are simply building to Google's API like everyone else is or will be. Intel is on the way as well, and no doubt their cards will also be aimed at the server market and the TensorFlow API.)

When AMD releases their next consumer card, don't worry: all the partial real-time ray tracing developers are talking about will work just fine... and most likely via Vulkan for either vendor.

Excuse me, but Turing has BOTH Tensor cores AND RT cores. Yes, you can do RT with Tensor cores alone (like Volta) or even without them (like Pascal), but it's not nearly as efficient.

This doesn't mean AMD can't do RT with just Tensor cores, or compute cores for that matter. But I think AMD will need some extra magic sauce to make it faster.
 
The more I look at it, the more it looks like just a die shrink.

While on paper it looks like it could stand between a Tesla P100 and a Tesla V100, I think it will fall behind the P100 and meet it at best.
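
For reference, the rough paper peaks (approximate public numbers as I recall them, SXM2 flavors for the NVIDIA cards, ignoring tensor-core throughput) line up like this:

```python
# Approximate published peak throughput in TFLOPS (FP64 / FP32 / FP16).
# Ballpark figures for comparison only, not benchmarks.
peak_tflops = {
    "Tesla P100": (5.3, 10.6, 21.2),
    "MI60":       (7.4, 14.7, 29.5),
    "Tesla V100": (7.8, 15.7, 31.4),  # plus ~125 TFLOPS fp16 via tensor cores
}
for name, (fp64, fp32, fp16) in peak_tflops.items():
    print(f"{name:>10}: {fp64:4.1f} / {fp32:4.1f} / {fp16:4.1f}")
```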

Still no small feat for mostly an increased clock speed and some tweaking.
 
The more I look at it, the more it looks like just a die shrink.

While on paper it looks like it could stand between a Tesla P100 and a Tesla V100, I think it will fall behind the P100 and meet it at best.

Still no small feat for mostly an increased clock speed and some tweaking.

You're smoking crack. The changes are very significant in how memory and CCX latencies are handled.
 
You're smoking crack. The changes are very significant in how memory and CCX latencies are handled.
Hence the "some tweaking" part.

Duh...

I realize I'm oversimplifying. It's like saying that Turing is tweaked Volta + RT cores, which isn't far from the truth.
 
Hence the "some tweaking" part.

Duh...

I realize I'm oversimplifying. It's like saying that Turing is tweaked Volta + RT cores, which isn't far from the truth.

We're not talking an i7-7700 -> i7-8700 here. That's a tweak. This is more akin to going from the first Core 2 to the i7, which is pretty damn significant.
 
We're not talking an i7-7700 -> i7-8700 here. That's a tweak. This is more akin to going from the first Core 2 to the i7, which is pretty damn significant.


Actually, i7-7700 -> i7-8700 would be a good example. Maybe not quite, but close.

It's pretty much the same Vega architecture plus FP64 and INT8 hardware (AMD's version of tensor cores, I guess), shrunk to 7 nm and clocked higher. I mean, it has the exact same number of cores, for crying out loud.

Just take a look at the Vega performance specs, multiply them by the factor the clock speed increased, and see what you get.
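
Back-of-the-envelope version of that multiplication (the clock figures are rough boost/peak numbers, so treat this as a sketch):

```python
# Peak FP32 = CUs x lanes per CU x 2 FLOPs per FMA x clock.
CUS, LANES, FLOPS_PER_FMA = 64, 64, 2

def peak_fp32_tflops(clock_ghz):
    return CUS * LANES * FLOPS_PER_FMA * clock_ghz / 1000.0

vega64 = peak_fp32_tflops(1.55)  # ~12.7 TFLOPS, close to the published spec
mi60   = peak_fp32_tflops(1.80)  # ~14.7 TFLOPS, close to the published spec
print(f"Vega 64 ~{vega64:.1f} TFLOPS, MI60 ~{mi60:.1f} TFLOPS, "
      f"+{100 * (mi60 / vega64 - 1):.0f}% from clocks alone")
```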

No magic sauce here, folks; that might not even come with Navi, but with the next generation, whatever it's called.
 
Actually, i7-7700 -> i7-8700 would be a good example. Maybe not quite, but close.

It's pretty much the same Vega architecture plus FP64 and INT8 hardware (AMD's version of tensor cores, I guess), shrunk to 7 nm and clocked higher. I mean, it has the exact same number of cores, for crying out loud.

Just take a look at the Vega performance specs, multiply them by the factor the clock speed increased, and see what you get.

No magic sauce here, folks; that might not even come with Navi, but with the next generation, whatever it's called.

I'm sorry, I'm brain-farting. I thought I was on the Epyc Rome post. But you are correct, this is more a refinement with a faster bus. Hopefully there are some fixes and improvements, but I doubt early discard is one of them, as this is aimed at AI.
 