AMD Presents New Horizon

Machine learning is machine learning; more accurate branch prediction and caching algorithms have nothing to do with it >_> Technical marketing gobbledygook is a scourge.

Uh, you know those algorithms rely on statistical methods, right? That's all machine learning is.
 
My Core i5-2500 took 3 minutes 31 seconds to render it.


Sorry, I just couldn't resist.
 
This is the first time I can recall AMD spending so much effort improving their pre-fetch since Phenom/Phenom II.

From Thunderbird to Barton, AMD just rode the "we got on-die cache, we gots enough bandwidth to do anything!" line.

Then they just rode the on-die memory controller, and left the cache algorithms mostly untouched through the Athlon 64 X2.

They didn't start to care about prefetch until Intel embarrassed them with an off-die memory controller plus large L2 cache with intelligent prefetch. It took until the Phenom II for them to finally get this working well.

Since Phenom, AMD's cache performance has been STAGNANT, while Intel's has shined. It's nice to see them realize that now!
 
Uh, you know those algorithms rely on statistical methods, right? That's all machine learning is.
[Slide: AMD Zen December 2016 Update, page 18]

If I program a computer to count cards in a card game of your choosing, using camera feeds for example, and play the best hand based on a statistical model, is that then a machine learning experiment?

They specifically mention neural net prediction. What is this running on? Where is the data it was trained on? Does the inferencing interfere with CPU performance?
 
Getting a bit nervous about AMD making a habit of comparing against a $1000 chip...


Like seriously: AMD has a history of making the worst possible decisions when handed golden tickets.

The ONLY way they are going to make buckets of cash from this new CPU is if it is so cheap that it's a plain, simple, no-other-choice move against buying the competition. Just like AMD's video cards: people will ALWAYS buy the competition when performance and price are equal. The general consensus is that MOST people will only jump the Nvidia ship when the price/performance is close to 60% on a comparably quick AMD card. In other words, AMD has to offer the same speed at 2/3 the price, or offer 30% faster performance on a similarly priced card.

This puts AMD at a HUGE disadvantage. Nvidia's mindshare is so strong that AMD needs to outright surpass Nvidia by a huge, obvious and undeniable margin before average Joe Gamer will even consider moving from the USDA-approved, family-friendly Nvidia brand.

And my guess is that this is even MORE severe in the CPU space. My guess is that average 'buy their stuff from Best Buy' people won't even know about Ryzen CPUs unless they breach a 50% price/performance ratio. And that is just knowing about it: imagine what it's going to take to get them to even CONSIDER buying some strange, alternative, 'best value' plastic-bag-cereal brand called 'ATM or something' versus the family-friendly, 100% American, plays-ads-on-TV-with-a-cute-jingle, has-their-product-names-embedded-in-every-consumer's-mind brand Intel.

The fact that AMD is trying to set the expectation that Ryzen is worth $1000 is making me REALLY worried that they are going to shit the bed and ACTUALLY price Ryzen CPUs in the X99 CPU range.

If the biggest-dick-in-the-room 3.4GHz 8-core unlocked mega chip is a single penny over $399.99, it will collect dust on shelves. It's not fair, but it's the truth of the market right now.



Also, AMD put out the demo of the Titan X SLI at 4K, one running on an Intel 6900K, the other on Ryzen, and said "you can't tell the difference!"
...
This is AMD here: if their CPU was ACTUALLY FASTER in games, they would milk that PR teat so dry that it would be ALL we would be talking about. Instead, AMD's CPU is shown to perform the same as an already inferior CPU for gaming in a heavily GPU-bottlenecked workload. Titan X SLI at 1440p 144Hz would have been a MUCH more realistic and CPU-dependent workload.
 
5820K at 4GHz/2133: 56.13 seconds on the Ryzen Blender demo.

This is my desktop work machine with like 1,000 other programs running in the background as well. Hardly a real benchmark score I would put my rep on...
 
[Slide: AMD Zen December 2016 Update, page 18]

If I program a computer to count cards in a card game of your choosing, using camera feeds for example, and play the best hand based on a statistical model, is that then a machine learning experiment?

They specifically mention neural net prediction. What is this running on? Where is the data it was trained on? Does the inferencing interfere with CPU performance?

I know a thing or two about AI and statistical modeling, as I use it to get machines to build themselves. And it is very possible to take a small series of micro-ops and data to form a decision vector and then execute based on the scalar result.
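To make the "decision vector, then execute on the scalar result" idea concrete, here is a minimal sketch of a perceptron-style branch predictor, in the spirit of the published perceptron-predictor research. Everything here (history length, training threshold, the toy alternating branch) is illustrative; it is not AMD's actual design.

```python
# Hedged sketch of a perceptron-style branch predictor. The branch history is
# a vector of +/-1 outcomes; prediction is the sign of a weighted sum.
class PerceptronPredictor:
    def __init__(self, history_len=8, threshold=16):
        self.history = [1] * history_len          # +1 = taken, -1 = not taken
        self.weights = [0] * (history_len + 1)    # index 0 is the bias weight
        self.threshold = threshold                # keep training until |sum| is large

    def _output(self):
        # Dot product of weights with (bias, history) -- the "scalar result"
        return self.weights[0] + sum(w * h for w, h in zip(self.weights[1:], self.history))

    def predict(self):
        return self._output() >= 0                # True => predict taken

    def update(self, taken):
        y = self._output()
        t = 1 if taken else -1
        # Train only on a mispredict or a low-confidence output
        if (y >= 0) != taken or abs(y) <= self.threshold:
            self.weights[0] += t
            for i, h in enumerate(self.history):
                self.weights[i + 1] += t * h
        self.history = self.history[1:] + [t]     # shift in the new outcome

# A branch that strictly alternates taken/not-taken is learned quickly:
p = PerceptronPredictor()
hits = 0
for i in range(200):
    taken = (i % 2 == 0)
    if p.predict() == taken:
        hits += 1
    p.update(taken)
```

After a short warm-up the weight on the most recent history bit dominates and the alternating pattern is predicted nearly perfectly.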
 
I know a thing or two about AI and statistical modeling, as I use it to get machines to build themselves. And it is very possible to take a small series of micro-ops and data to form a decision vector and then execute based on the scalar result.

Would you call that a neural network? I am not debating the relevance of statistical modelling in branch prediction or caching algorithms, I am simply debating the accuracy of their terminology.

It sounds like what they're really saying is "improved branch prediction", but they wanted to hype it up.
 
Getting a bit nervous about AMD making a habit of comparing against a $1000 chip...


Like seriously: AMD has a history of making the worst possible decisions when handed golden tickets.

The ONLY way they are going to make buckets of cash from this new CPU is if it is so cheap that it's a plain, simple, no-other-choice move against buying the competition. Just like AMD's video cards: people will ALWAYS buy the competition when performance and price are equal. The general consensus is that MOST people will only jump the Nvidia ship when the price/performance is close to 60% on a comparably quick AMD card. In other words, AMD has to offer the same speed at 2/3 the price, or offer 30% faster performance on a similarly priced card.

This puts AMD at a HUGE disadvantage. Nvidia's mindshare is so strong that AMD needs to outright surpass Nvidia by a huge, obvious and undeniable margin before average Joe Gamer will even consider moving from the USDA-approved, family-friendly Nvidia brand.

And my guess is that this is even MORE severe in the CPU space. My guess is that average 'buy their stuff from Best Buy' people won't even know about Ryzen CPUs unless they breach a 50% price/performance ratio. And that is just knowing about it: imagine what it's going to take to get them to even CONSIDER buying some strange, alternative, 'best value' plastic-bag-cereal brand called 'ATM or something' versus the family-friendly, 100% American, plays-ads-on-TV-with-a-cute-jingle, has-their-product-names-embedded-in-every-consumer's-mind brand Intel.

The fact that AMD is trying to set the expectation that Ryzen is worth $1000 is making me REALLY worried that they are going to shit the bed and ACTUALLY price Ryzen CPUs in the X99 CPU range.

If the biggest-dick-in-the-room 3.4GHz 8-core unlocked mega chip is a single penny over $399.99, it will collect dust on shelves.



Also, AMD put out the demo of the Titan X SLI at 4K, one running on an Intel 6900K, the other on Ryzen, and said "you can't tell the difference!"
...
This is AMD here: if their CPU was ACTUALLY FASTER in games, they would milk that PR teat so dry that it would be ALL we would be talking about. Instead, AMD's CPU is shown to perform the same as an already inferior CPU for gaming in a heavily GPU-bottlenecked workload. Titan X SLI at 1440p 144Hz would have been a MUCH more realistic and CPU-dependent workload.


Well, whatever it comes down to, there will be a limited price war. If Zen can compete with Broadwell at that level, it's going to be priced just under it. Personally, I would price it the same and let Intel cut prices first, so that AMD can start pressuring Intel, but then again, we haven't seen AMD do this, like, ever, when products are close. They aren't going to give things away, since Intel will just drop prices to stay competitive.
 
While 8/16 cores and threads is neat, do they have any plans for the lower end?
The masses likely don't spend that much on a CPU, and would make do with a decent 4/8 chip at a competitive price point.

I also believe I read somewhere about an APU combining Zen and an RX 460, perfect for casual gaming.
 
While 8/16 cores and threads is neat, do they have any plans for the lower end?
The masses likely don't spend that much on a CPU, and would make do with a decent 4/8 chip at a competitive price point.

I also believe I read somewhere about an APU combining Zen and an RX 460, perfect for casual gaming.


Well, they haven't shown us anything they didn't show us before, so why is that?
 
So... it's still running base clocks, basically matching or beating the 6900K ($1000+ chip).
I guess this means Lisa Su sold her soul to the devil or they discovered a magic wand somewhere because that's a miracle.

How much is this thing gonna cost?
 
You sound extremely butthurt over this. What's your deal?
He's not wrong.

Let's imagine Ryzen is legitimately faster in games compared to a 6900K.

How do you think that would go down, press-conference-wise? Wouldn't you think AMD would show off a number of games, at a variety of resolutions, showing that Ryzen tops the Intel chip every time?

Or would they show a single game at a GPU-bound resolution and say 'you can't tell the difference'?
 
Crap, while this was going on I forgot to set the time on the Anova. FUCK. Damn you, AMD.
 
Would you call that a neural network? I am not debating the relevance of statistical modelling in branch prediction or caching algorithms, I am simply debating the accuracy of their terminology.

It sounds like what they're really saying is "improved branch prediction", but they wanted to hype it up.


Everything that we call AI/Machine learning is just guided statistics. The same old stats we've been doing for decades. Centuries in some cases. Even simple linear regression is considered machine learning in the right context.
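To illustrate the point, here is plain least-squares linear regression trained by gradient descent. By the usual working definition (a model whose parameters improve from data), this already counts as machine learning. The toy data, learning rate, and step count are arbitrary.

```python
# Simple linear regression y = w*x + b, fit by gradient descent on mean
# squared error -- "machine learning" in its most stripped-down form.
def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # exactly y = 2x + 1
w, b = fit_line(xs, ys)
```

The fitted parameters converge to roughly w = 2 and b = 1, recovering the line the data was drawn from.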
 
So... it's still running base clocks, basically matching or beating the 6900K ($1000+ chip).
I guess this means Lisa Su sold her soul to the devil or they discovered a magic wand somewhere because that's a miracle.

How much is this thing gonna cost?

Or the cache is that much better... and the RAM controller.

Or maybe they actually did some real work on this CPU instead of letting an automated system do the design.
 
Everything that we call AI/Machine learning is just guided statistics. The same old stats we've been doing for decades. Centuries in some cases. Even simple linear regression is considered machine learning in the right context.


Well, it depends on whether they are talking about neural nets or state machines; linear regression is done with state machines, not neural nets.

There is a big difference in approach between the two types of systems. Deep learning is based on neural nets, and that is where true reasoning comes in and why programs have to "learn" to be effective.

A state machine system is probably what they are using in Ryzen, and that is not real AI. There is a database gathering information, which weights that information and passes it through state machines, and out comes an output. There is no real learning involved.
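The table-driven scheme described above can be sketched with the classic two-bit saturating-counter predictor: a table indexed by branch address, with counters nudged toward taken or not-taken on each outcome, and no neural net anywhere. The table size, hashing, and toy branch stream are all illustrative.

```python
# Classic two-bit saturating-counter branch predictor: pure table lookup and
# state update, the "state machine" style described above.
TABLE_SIZE = 1024

class TwoBitPredictor:
    def __init__(self):
        # Counter states: 0,1 = predict not taken; 2,3 = predict taken
        self.table = [1] * TABLE_SIZE

    def predict(self, pc):
        return self.table[pc % TABLE_SIZE] >= 2

    def update(self, pc, taken):
        i = pc % TABLE_SIZE
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# A loop branch taken 9 times out of 10 is predicted well once the counter saturates:
p = TwoBitPredictor()
hits = 0
for i in range(1000):
    taken = (i % 10 != 9)                  # taken 90% of the time
    if p.predict(0x400) == taken:
        hits += 1
    p.update(0x400, taken)
```

The two-bit hysteresis means the single not-taken iteration per loop trip costs only one mispredict instead of two, which is exactly why this state machine was the workhorse of early hardware predictors.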
 
He's not wrong.

Let's imagine Ryzen is legitimately faster in games compared to a 6900K.

How do you think that would go down, press-conference-wise? Wouldn't you think AMD would show off a number of games, at a variety of resolutions, showing that Ryzen tops the Intel chip every time?

Or would they show a single game at a GPU-bound resolution and say 'you can't tell the difference'?

Let's give you an hour to lay out what you truly wanted to showcase, and see if you'd have time to show off a number of games at a variety of resolutions.

Regardless, you can't deny that it's looking very good.
 
Let's give you an hour to lay out what you truly wanted to showcase, and see if you'd have time to show off a number of games at a variety of resolutions.

Regardless, you can't deny that it's looking very good.

Really, it was the same thing they showed us months ago. OK, great, they put a 2-minute encoding program in there too. Hmm, then they showed a system that is heavily GPU-bound to show off CPU performance?
 
I got 58.63s on the second run with my 4930K at 4.7GHz.
That does not seem right to me compared to my scores.

5820K at 4GHz/2133: 56.13 seconds on the Ryzen Blender demo.

This is my desktop work machine with like 1,000 other programs running in the background as well. Hardly a real benchmark score I would put my rep on...
 
So my [email protected] does the Blender demo in 3m (180s); maybe it's finally time to upgrade after 6+ years, as long as we get native USB 3.1 support as well.
 
Would you call that a neural network? I am not debating the relevance of statistical modelling in branch prediction or caching algorithms, I am simply debating the accuracy of their terminology.

It sounds like what they're really saying is "improved branch prediction", but they wanted to hype it up.

It is possible. A neural network is just a fancy term for several inputs being fed into a larger processing machine for an overall decision. Each of these inputs represents a wavelet or linear equation that can be stuffed into an AVX register and evaluated. Where the scalar lands after some training determines whether the answer is true or false.
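A minimal sketch of that idea: several inputs collapse into one weighted sum (the kind of fused multiply-add a SIMD unit would do in one step), and after training, the sign of the scalar gives the true/false answer. Training a simple AND gate here with the classic perceptron rule is purely illustrative.

```python
# Single-layer perceptron: weighted inputs reduced to one scalar, with the
# sign of that scalar giving the decision. Inputs and targets use +/-1.
def train_perceptron(samples, epochs=20, lr=1.0):
    # samples: list of (inputs, target) with target in {+1, -1}
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            s = b + sum(wi * xi for wi, xi in zip(w, x))   # the scalar result
            if (s >= 0) != (t > 0):                        # mispredicted: nudge weights
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
    return w, b

AND = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = train_perceptron(AND)
decide = lambda x: b + sum(wi * xi for wi, xi in zip(w, x)) >= 0
```

Since AND is linearly separable, the weights converge after a handful of updates, and `decide` then returns True only for the (1, 1) input.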
 
He's not wrong.

Let's imagine Ryzen is legitimately faster in games compared to a 6900K.

How do you think that would go down, press-conference-wise? Wouldn't you think AMD would show off a number of games, at a variety of resolutions, showing that Ryzen tops the Intel chip every time?

Or would they show a single game at a GPU-bound resolution and say 'you can't tell the difference'?

If they had shown lower resolutions, every gamer would have said that Ryzen processors are only for low-end gaming. If you want to sell high-end processors with high-end price tags, you have to show them performing upper-echelon tasks. 4K gaming, VR, and HDR are where the high-end tasks that need solving exist today.
 
Yup, then all the usuals would be laughing and saying "see, I told you it's shit!" Now that it's showing better than rumored/expected, they gotta bitch about something else. No other AMD chip can even come close to properly driving a Titan XP, so that right there is a big improvement, and $1000 performance for ~$500 sounds good to me!
 
Yup, then all the usuals would be laughing and saying "see, I told you it's shit!" Now that it's showing better than rumored/expected, they gotta bitch about something else. No other AMD chip can even come close to properly driving a Titan XP, so that right there is a big improvement, and $1000 performance for ~$500 sounds good to me!

They can't win either way lmao.
 