Leaked Patch Confirms AMD Zen Will Have 32 Cores Per Socket?

No, they're not. The only thing semi-directly related to Mantle was when parts of it were used to speed up Vulkan development, but Vulkan is not Mantle or a descendant of it. glNext had been in development for years, and Vulkan retains many parts from that process. Read the link I posted. It's a very good overview of Vulkan.
DX12 is not based on Mantle. As AnandTech's coverage pointed out, while there are some superficial high level similarities, the details are very different: http://www.anandtech.com/show/7889/...level-graphics-programming-comes-to-directx/2

Vulkan is derived from Mantle https://youtu.be/qKbtrVEhaw8?t=661

As for DX12, it is not in line with what MS did before; they never took such a big step, never doing anything but optimize their old API. https://www.youtube.com/watch?v=sSY2KXBoro0
None of them mention how MS approached them and told them that Mantle was a waste of time ...
 
Vulkan is derived from Mantle https://youtu.be/qKbtrVEhaw8?t=661

As for DX12, it is not in line with what MS did before; they never took such a big step, never doing anything but optimize their old API. https://www.youtube.com/watch?v=sSY2KXBoro0
None of them mention how MS approached them and told them that Mantle was a waste of time ...

That's where the whole "Microsoft was working on DX12 since, like, 300 BC" claims are bogus in my eyes. In 2013, AMD publicly said that Microsoft had not told them of ANY plans for a DirectX successor to version 11, and that Microsoft made it clear there were no plans when asked. You don't make that shit up and announce it publicly.

Source
 
Yes, I've seen the video. AMD gets a thanks for its contribution to the process. As the AnandTech article I've already referenced twice explains, Vulkan is not Mantle. Mantle was used in Vulkan's development and had many parts removed and/or replaced. AT describes it as a bootstrap, and the two are not the same or directly related. The most accurate description I've read is that bits of Mantle are in Vulkan, and even a simple examination of the APIs shows they are not the same.

To put it in perspective for both of you who seem to have a problem understanding this: DX12 was first demonstrated on Nvidia hardware since AMD did not have drivers available. At GDC last year, Vulkan demos were demonstrated on various GPUs, including Intel, Nvidia and ImgTec, but not AMD. Why would AMD lag so badly behind if "vulkan is mantle" or "DX12 is mantle"?
 
Yes, I've seen the video. AMD gets a thanks for its contribution to the process. As the AnandTech article I've already referenced twice explains, Vulkan is not Mantle. Mantle was used in Vulkan's development and had many parts removed and/or replaced. AT describes it as a bootstrap, and the two are not the same or directly related. The most accurate description I've read is that bits of Mantle are in Vulkan, and even a simple examination of the APIs shows they are not the same.

To put it in perspective for both of you who seem to have a problem understanding this: DX12 was first demonstrated on Nvidia hardware since AMD did not have drivers available. At GDC last year, Vulkan demos were demonstrated on various GPUs, including Intel, Nvidia and ImgTec, but not AMD. Why would AMD lag so badly behind if "vulkan is mantle" or "DX12 is mantle"?

The Cynic in me says that Nvidia hands out fistfuls of cash to maintain a public presence, and Intel has been proven, beyond a shadow of a doubt, with legal documentation, to pay out comically large amounts of money to lock out competitors, so...
 
That's where the whole "Microsoft was working on DX12 since, like, 300 BC" claims are bogus in my eyes. In 2013, AMD publicly said that Microsoft had not told them of ANY plans for a DirectX successor to version 11, and that Microsoft made it clear there were no plans when asked. You don't make that shit up and announce it publicly.
The problem with that view is that it doesn't take into account the long development cycles of GPUs, and that the beyond-DX11.x capabilities exposed in DX12 across multiple vendors suggest it had been in development for years before the announcement. Despite the quicker release rate of GPUs, it still takes years from design to release; multiple GPUs are in the pipeline at once at various stages of development.

It just looks like another AMD whine-of-the-week when it denies having been involved with DX12 for a pretty long time: http://techreport.com/review/26239/a-closer-look-at-directx-12
 
and Intel has been proven, beyond a shadow of a doubt, with legal documentation, to pay out comically large amounts of money to lock out competitors, so...
How's the view from 2005? I have good memories of way back then. :D
 
How's the view from 2005? I have good memories of way back then. :D

Hey, if you're happy to forget these things, good for you! But when I touch a hot stove and get burnt, I remember not to touch stoves before checking: it's a critical part of learning. Stoves can still be hot in 11 years; there's no expiry date on experience. Intel is still Intel. They've paid good money to be where they are.
 
I'm not the only one thinking it.


Hilarious, but I doubt Intel even thinks about AMD these days. I hate to say it, but AMD no longer matters. They are a bit player. They only get press at all on their CPUs because they are the only other x86 CPU maker.

They are headed the same way with their GPUs. :(
 
Hilarious, but I doubt Intel even thinks about AMD these days. I hate to say it, but AMD no longer matters. They are a bit player. They only get press at all on their CPUs because they are the only other x86 CPU maker.

They are headed the same way with their GPUs. :(

Yeah, AMD is insignificant. Intel hasn't made a reactionary move to their products in over a decade. Nvidia is still reacting to AMD to a small degree. The 980 Ti was a bit of a pre-emptive strike.
 
The crew at TechFrag have been digging around in what is believed to be a leaked Linux patch and have come to the conclusion that AMD Zen based processors will feature up to 32 physical cores.

A leaked Linux patch on LKML.org, first spotted by The New Citavia Blog, suggests that AMD Zen based processors will feature up to 32 physical cores. The patch also hints at the similarity of parts of the “Zen” and “Zeppelin” codenames. The Zeppelin codename was first mentioned back in August last year, and parts of the patch identify it as a “family 17h model 00h” CPU.
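For context on the "family 17h" designation: on x86, the displayed CPU family is derived from CPUID leaf 1's EAX register, where an extended family field is added to the base family whenever the base family reads 0xF. A rough sketch of that decoding (the sample EAX value below is illustrative, not taken from the patch):

```python
def decode_cpu_family(eax: int) -> int:
    """Decode the displayed CPU family from CPUID leaf 1 EAX."""
    base_family = (eax >> 8) & 0xF    # bits 11:8
    ext_family = (eax >> 20) & 0xFF   # bits 27:20
    # The extended family field only applies when the base family
    # is 0xF; family 17h = 0xF + 0x8.
    if base_family == 0xF:
        return base_family + ext_family
    return base_family

# Hypothetical EAX with base family 0xF and extended family 0x08:
print(hex(decode_cpu_family(0x00800F00)))  # 0x17 -> "family 17h"
```

So "family 17h" simply marks Zen/Zeppelin as a new top-level CPU generation in AMD's numbering, distinct from Bulldozer-era family 15h parts.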

Steve, the long and short of it is AMD is working on a massive exascale APU that will have 16 cores and 32 threads, with a Greenland GPU on the interposer and a lot of HBM2. It's been making the rounds for the last year.

http://wccftech.com/amd-exascale-heterogeneous-processor-ehp-apu-32-zen-cores-hbm2/

The fact that this code exists proves AMD is not pulling punches with Zen; they are aiming for Zen to be a game changer.
 
I see people still subscribe to the whole "there's no way AMD impacted anything, DX12 was in development for 50 years in spite of the fact that MS themselves denied it before a certain point as did various software devs/companies" theory. I guess it's just a huge coincidence that DX12 and Vulkan didn't really spring up until a good deal of time after Mantle was publicly announced (just because Mantle was announced at that point doesn't mean it wasn't in development for a while before that either, obviously.)

Of course there will be many differences between Mantle and the offshoots; they have to support expanded techniques/features as well as many more vendors than just AMD. That should be obvious. There is tons of documentation, and articles, and statements from people confirming that yes, Vulkan IS derived from Mantle, and so, basically, is D3D12, but hey, if self-delusion is what people wanna do, be my guest.
link
another one

"Many companies have contributed to Vulkan and SPIR-V—Mantle gave us a tremendous head start—but Vulkan is definitely a working group design now," Trevett said. "Multiple hardware companies demonstrated early Vulkan drivers at GDC - including Imagination, Intel, and Nvidia—ARM also reported on their driver performance."

You see deliberate phrasing like "gave us [a head start]" and "lays the foundation" but still try to downplay it by saying "oh yeah Mantle was just bits and pieces, not that significant"??

I mean I'm not trying to imply that Mantle was the be-all end-all or anything, just that YES, that API was indeed derived from Mantle. However much it has expanded since then is completely beside the point, because that was AMD's intention from the get-go: for it to evolve and support just about everybody as an open standard. Mantle's sole purpose was to light a fire under everybody's asses to get stuff like DX12, Khronos' Vulkan, Metal, etc. off the ground. Yes, they called it "glNext" before, but that's just marketing, and even then nobody heard about that until well after Mantle had debuted. Vulkan wasn't even fully spec'd until recently. The 1.0 SDK still hasn't come out yet.

But yeah, go ahead and try to spin it as if these things were in development for five decades or whatever and that AMD had little or nothing to do with it lol.
 
Yes, I've seen the video. AMD gets a thanks for its contribution to the process. As the AnandTech article I've already referenced twice explains, Vulkan is not Mantle. Mantle was used in Vulkan's development and had many parts removed and/or replaced. AT describes it as a bootstrap, and the two are not the same or directly related. The most accurate description I've read is that bits of Mantle are in Vulkan, and even a simple examination of the APIs shows they are not the same.

To put it in perspective for both of you who seem to have a problem understanding this: DX12 was first demonstrated on Nvidia hardware since AMD did not have drivers available. At GDC last year, Vulkan demos were demonstrated on various GPUs, including Intel, Nvidia and ImgTec, but not AMD. Why would AMD lag so badly behind if "vulkan is mantle" or "DX12 is mantle"?

HOLY CRAP!!! I cannot believe you tried that bull; completely laughable. The Nvidia DX12 showcase was about the lighting and post-processing aspects of the higher-tiered DX12 features (hence the ensuing features-vs-function debates). It was well known that AMD supported DX12 out of the gate, but not the higher-tiered feature sets.

And for the love of God, just admit Mantle gave birth to both Vulkan and DX12. They aren't direct copies, and only a moron would think so, but they are evolved versions of it. Like a staircase, Mantle is the first step.
 
DX12 didn't come only from Mantle, and Mantle itself evolved from DX11.x, Sony's PS4 low-level API (GNM), and libGCM (the PS3's low-level API).

Vulkan is a further step away from any other low-level API, even more so than DX12. But sadly it probably won't be picked up much by PC game developers, since DX12 will already be integrated into engines well before Vulkan is even ready. And since Linux gaming isn't at the forefront of game developers' interests, that isn't going to push Vulkan much either. Now if Vulkan had come out before DX12, that would have been nice, but that didn't happen.

Even though many people "think" Mantle started the low-level API push, it didn't; the Xbox One's API came before, and then you have the PS4 and, before that, the PS3. Consoles before the PS3 had lower-level APIs too. The reason for this is that a console is a closed system, which makes low-level APIs easier to do. You won't get the issues you see with async, command buffer access, etc. in a closed system vs. an open system.

And what does this have to do with CPUs, lol.
 
DX12 didn't come only from Mantle, and Mantle itself evolved from DX11.x, Sony's PS4 low-level API (GNM), and libGCM (the PS3's low-level API).
Reference please. My understanding is that mantle was a clean start.
 
Reference please. My understanding is that mantle was a clean start.

From what I understand, the Gamecube, Xbox 360, Wii, PS4, Xbone and Wii U all had similar, but not identical APIs for graphics. AMD used these as a springboard for developing a PC-centric low-level API. That does not mean Mantle was a copy-paste job from those APIs, but rather it has heritage in those older consoles.
 
I really hope AMD isn't trying to launch a 32-core processor for the consumer market. General tasks/gaming on an Intel Xeon 14C/28T are pretty abysmal due to lack of proper software. Virtualization, on the other hand...

Though on the Opteron side, I can barely give away the darn things. :(

Did you know that CryEngine games can use up to 22 cores? Of course not. Dude/Dudette, you should try editing your config files (.ini and .xml), and don't forget to properly set up your system in the BIOS settings, etc. Do those things and games will fully utilise your Xeon system.
 
For my job I can see these being nice. One less Proc to worry about in most systems.
 
Steve, the long and short of it is AMD is working on a massive exascale APU that will have 16 cores and 32 threads, with a Greenland GPU on the interposer and a lot of HBM2. It's been making the rounds for the last year.

http://wccftech.com/amd-exascale-heterogeneous-processor-ehp-apu-32-zen-cores-hbm2/

The fact that this code exists proves AMD is not pulling punches with Zen; they are aiming for Zen to be a game changer.

Not really. AMD already has 16+ core CPUs in the server space, and Skylake is known to have at least a 28-core CPU in the works. Server workloads always benefit from more cores. 16 cores/32 threads would actually be slim pickings compared to what Intel already has out there, honestly.
 
Reference please. My understanding is that mantle was a clean start.


Unless you have signed contracts with Sony and have access to their SDK, I can't share more info than that; you can search for the Xbox One SDK and look into that, though.

Mantle was not a clean slate; it was derived from other APIs, but it doesn't matter, since it was the first low-level API for PCs.
 
Yes, I've seen the video. AMD gets a thanks for its contribution to the process. As the AnandTech article I've already referenced twice explains, Vulkan is not Mantle. Mantle was used in Vulkan's development and had many parts removed and/or replaced. AT describes it as a bootstrap, and the two are not the same or directly related. The most accurate description I've read is that bits of Mantle are in Vulkan, and even a simple examination of the APIs shows they are not the same.

To put it in perspective for both of you who seem to have a problem understanding this: DX12 was first demonstrated on Nvidia hardware since AMD did not have drivers available. At GDC last year, Vulkan demos were demonstrated on various GPUs, including Intel, Nvidia and ImgTec, but not AMD. Why would AMD lag so badly behind if "vulkan is mantle" or "DX12 is mantle"?

GUYS stop beating this dead horse http://www.pcworld.com/article/2894...ises-from-the-ashes-as-opengls-successor.html
Fact: no one was looking at a to-the-metal API till AMD pushed the industry that way...

BACK ON TOPIC
 
Yes, I've seen the video. AMD gets a thanks for its contribution to the process. As the AnandTech article I've already referenced twice explains, Vulkan is not Mantle. Mantle was used in Vulkan's development and had many parts removed and/or replaced. AT describes it as a bootstrap, and the two are not the same or directly related. The most accurate description I've read is that bits of Mantle are in Vulkan, and even a simple examination of the APIs shows they are not the same.

To put it in perspective for both of you who seem to have a problem understanding this: DX12 was first demonstrated on Nvidia hardware since AMD did not have drivers available. At GDC last year, Vulkan demos were demonstrated on various GPUs, including Intel, Nvidia and ImgTec, but not AMD. Why would AMD lag so badly behind if "vulkan is mantle" or "DX12 is mantle"?

Stop deflecting; it is annoying when answers given are returned with more questions ...
 
Unless you have signed contracts with Sony and have access to their SDK, I can't share more info than that; you can search for the Xbox One SDK and look into that, though. Mantle was not a clean slate; it was derived from other APIs, but it doesn't matter, since it was the first low-level API for PCs.

What is known about Mantle is that it took its cue from how consoles have been moving forward, with the hindsight of how bad the current (PC) driver models are and how much more performance can be handled by parties interested in doing "all the work" while AMD provides a very thin layer. Mantle "copied" the way shaders are programmed under DX, and that is about it.
 
What is known about Mantle is that it took its cue from how consoles have been moving forward, with the hindsight of how bad the current (PC) driver models are and how much more performance can be handled by parties interested in doing "all the work" while AMD provides a very thin layer. Mantle "copied" the way shaders are programmed under DX, and that is about it.


Mantle copied many parts from other APIs as well: command queue manipulation and buffering were done the way the PS3's API did them, shaders from DX, as you stated, etc.

Programming at a low level is hardly new, and the old problems that occurred with low-level programming in the past will become "new" problems today. This is why higher-level APIs will still have their place in the future, hence why Windows 10 still supports older driver models.

While AMD states its drivers are a thin layer (thinner than before is a better way of putting it than very thin), there is still some abstraction that goes on at the API and driver level; otherwise there would be no backward compatibility or cross-vendor compatibility whatsoever.
 
I'm not the only one thinking it.

Trolls will be trolls. You do realize AMD's Zen CPU uses SMT like Intel, as opposed to CMT... Also, this monster APU is aimed at the ultra-high-end HPC market, not consumers.

"The APU, dubbed an “Exascale Heterogeneous Processor” or EHP for short is the mother of all APUs with 32 Zen Cores, an absolutely huge Greenland graphics die and upto 32 GB of HBM2 memory – all on a 2.5D interposer"

http://wccftech.com/amd-exascale-heterogeneous-processor-ehp-apu-32-zen-cores-hbm2/

FACTS>TROLLING
 
Trolls will be trolls. You do realize AMD's Zen CPU uses SMT like Intel, as opposed to CMT... Also, this monster APU is aimed at the ultra-high-end HPC market, not consumers.

"The APU, dubbed an “Exascale Heterogeneous Processor” or EHP for short is the mother of all APUs with 32 Zen Cores, an absolutely huge Greenland graphics die and upto 32 GB of HBM2 memory – all on a 2.5D interposer"

http://wccftech.com/amd-exascale-heterogeneous-processor-ehp-apu-32-zen-cores-hbm2/

FACTS>TROLLING

Holy crap never seen this before today. I hope AMD doesn't croak before they release it.
 
Holy crap never seen this before today. I hope AMD doesn't croak before they release it.

AMD won't die; they are not in any real trouble till the 2019/2020 fab bonds come due. All the spin about them being in a death spiral is just trolling.
 
Trolls will be trolls. You do realize AMD's Zen CPU uses SMT like Intel, as opposed to CMT... Also, this monster APU is aimed at the ultra-high-end HPC market, not consumers.

"The APU, dubbed an “Exascale Heterogeneous Processor” or EHP for short is the mother of all APUs with 32 Zen Cores, an absolutely huge Greenland graphics die and upto 32 GB of HBM2 memory – all on a 2.5D interposer"

http://wccftech.com/amd-exascale-heterogeneous-processor-ehp-apu-32-zen-cores-hbm2/

FACTS>TROLLING

Ah, I understand, so Zen will go in lots of things. This is one of the things Zen will go into, and it will have 32 cores, a big ass GPU and lots of RAM on an interposer, for integration into supercomputers in large quantities.

Zen will also go into other things, like AM4-based consumer PCs and, on an as-yet-undetermined socket, enterprise server platforms. But just because the supercomputer-oriented part has a 32-core count does NOT mean the desktop AM4 variety can support that many, and it probably won't.

The enterprise server socket might though, but as is typical for server parts with high core counts, they will likely be clocked MUCH lower than their desktop counterparts, in order to fit into a thermal envelope that is feasible to cool in one socket.
 
AMD won't die; they are not in any real trouble till the 2019/2020 fab bonds come due. All the spin about them being in a death spiral is just trolling.

Not really.

Their cash on hand keeps shrinking. Recent layoffs were likely in order to keep their burn rate down so they can survive until new products bring income.

In their last quarterly report AMD had $785 million in cash on hand. Previous "analyst" speculation has been that they start seeing some real trouble if that figure drops below ~$600 million before Zen starts driving volume revenue. I am not in a position to assess the validity of this claim, but it doesn't sound unreasonable to me.

On the positive side, cash on hand was up $30m last quarter despite the operating losses of $49m due to "improved operating cash flow". Some amount of that was likely helped by the layoffs.

AMD is currently operating like a startup. A very large startup, but a startup nonetheless, with the need to keep an eye on their burn rate and release products before their cash on hand runs out.

The Fab bonds are a different, and huge issue once they come due, but if they are successful with Zen and next gen graphics products, hopefully they can either raise enough money to pay some of those off, or at least look healthy enough to get some replacement bonds in place.
 
Zarathustra[H] said:
The Fab bonds are a different, and huge issue once they come due, but if they are successful with Zen and next gen graphics products, hopefully they can either raise enough money to pay some of those off, or at least look healthy enough to get some replacement bonds in place.

Those bonds are unsecured, and very risky in that they have to have the total amount in hand 60 days prior to maturity otherwise AMD defaults. So they need to keep what cash they have in hand right now, just in case things don't go as planned with Zen and upcoming GPU.
 
Those bonds are unsecured, and very risky in that they have to have the total amount in hand 60 days prior to maturity otherwise AMD defaults. So they need to keep what cash they have in hand right now, just in case things don't go as planned with Zen and upcoming GPU.

Ahh, that might be where the $600m figure comes from.
 
Unless you have signed contracts with Sony and have access to their SDK, I can't share more info than that; you can search for the Xbox One SDK and look into that, though.

Mantle was not a clean slate; it was derived from other APIs, but it doesn't matter, since it was the first low-level API for PCs.

I think I misunderstood. What I meant by 'clean' was that it didn't maintain API/ABI compatibility with anything prior. I can totally buy that it was inspired by prior work.
 
Did you know that CryEngine games can use up to 22 cores? Of course not. Dude/Dudette, you should try editing your config files (.ini and .xml), and don't forget to properly set up your system in the BIOS settings, etc. Do those things and games will fully utilise your Xeon system.

That's great to know! Unfortunately I haven't really played much in the way of CryEngine games, but Star Citizen is on my list. Thanks!
 
Can't wait for the future tech of 128 cores. Maybe by that time we'll finally start seeing the first quad-core optimized game.
 
Can't wait for the future tech of 128 cores.

Hopefully you will not have to take a second mortgage on your house to afford such a processor.. I do not expect something like this to be anything other than enterprise for decades.
 
Can't wait for the future tech of 128 cores. Maybe by that time we'll finally start seeing the first quad-core optimized game.

We need to put the "bad code optimization" line of thinking to bed once and for all.

Don't get me wrong, there are plenty of examples of bad code hampering the performance of many-core CPUs, but we are no longer where we were in 2007 and 2008, when AMD's Phenom and Phenom II and Intel's Yorkfield (Core 2 Quad), Bloomfield (i7-9xx), and Lynnfield (i7-8xx) made quad cores commonplace overnight.

Back then (or if you play older titles today) it was very commonplace for games to pin one thread and not load any of the others. These days most titles try to offload different features to different cores.

The problem we have now is less of a "lazy programmer/developer" problem people complained about in those early quad core days, and more of a "Computer Science fundamentals" problem.

In any given program, depending on what it does, only a certain percentage of the code is a candidate for multithreading. Some of the code simply doesn't benefit from it no matter how skilled the programmer, other code actually FAILS when you try to multithread it, due to thread locks and other problems.

So, there is only a certain percentage of code that is a candidate for multithreading in the first place. In something like encoding/rendering/scientific number-crunching type operations, that percentage is very very high. In other applications - however - it is typically much much lower.

Amdahl's law shows how speedup scales with core count for a given percentage of parallelizable code. While encoding/rendering operations may follow the green 95% line, the theoretical max for most types of games is probably closer to the blue 50% line, and as you can see, the gains as we increase past 4 cores are rather limited. The gains past 8 cores are tiny. Past 16 cores they are negligible.

[Chart: Amdahl's law speedup vs. core count for various parallel fractions]


This is not a matter of developers or programmers just putting in the extra work and then it will work. If no more than 50% of your code base is threadable, scaling will be mostly unnoticeable past 4 cores and effectively end past 16 cores, no matter how good and motivated your programmer is.

Because of this, I don't think the future of desktop computers involves these super high core count chips. There may be a niche market for them for people who do a lot of encoding/rendering, but for typical use and for games, I don't see there being a market for much above 8 cores, not now, not ever.

This is a fundamental of how software works, not a "it hasn't been done yet" type problem.
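The Amdahl's law figures discussed above are easy to check directly; a quick sketch (the chosen parallel fractions mirror the 95% and 50% lines mentioned in the post):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Maximum speedup for a program where `parallel_fraction`
    of the work can be spread across `cores` cores (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A 95% parallel workload (encoding/rendering) keeps scaling...
print(round(amdahl_speedup(0.95, 4), 2))   # 3.48
print(round(amdahl_speedup(0.95, 16), 2))  # 9.14
# ...while a 50% parallel workload (a typical game) flatlines early:
print(round(amdahl_speedup(0.50, 4), 2))   # 1.6
print(round(amdahl_speedup(0.50, 16), 2))  # 1.88
# Even with infinite cores, a 50% parallel program caps at 2x total.
```

Quadrupling the core count from 4 to 16 buys the 50% parallel program less than 18% more performance, which is the point being made about games.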
 
This is a fundamental of how software works, not a "it hasn't been done yet" type problem.

Well, there was no need to do it, since everything revolved around the same structure. Now that parallel code can have a bigger impact in more areas, it is bound to influence performance, like what BeOS did to Linux and Windows ...

We are seeing benefits today, and with Mantle, Vulkan, and DX12 (and Metal) we're seeing a shift; this is not something that is futile.
 
What is known about Mantle is that it took their Que from how consoles have been moving forward with the hindsight of how badly current(PC) driver models are and how much more performance can be handled by parties interested in doing "all the work" while AMD provide a very thin layer . Mantle "copied" the way shaders are programmed under DX and that is about it.

There is a danger though: As time goes on, the GPU hardware design is going to change. And the low-level assumptions made by game developers on how GPUs execute will not remain true. I fully expect as GPUs get re-designed, you will see performance loss, if not games outright no longer being able to run. Consoles can get away with low-level APIs because they have exactly ONE hardware spec, and future compatibility does not matter. On PCs, this is not the case.

Ahh, that might be where the $600m figure comes from.

The $600 million figure comes from AMD's debt payments that come due in 2019. They need to have the cash on hand to pay off that figure.

Point being, losing upwards of $100 million per quarter, when the total valuation of the company is only about $750 million or so, with a $600 million debt payment due in just three years, is what's giving investors a lot of pause. AMD has been cutting itself apart trying to make Zen, which is realistically its last chance to turn a profit and hopefully raise the necessary cash.

This is also why I believe that making the GPU unit a standalone business entity was preparation for potentially selling it off. If Zen doesn't make enough money, selling off the old ATI unit is pretty much the last thing AMD could do to raise the necessary cash to meet its payments.

In addition, a further $400 million in debt payments comes due in 2020.

Well, there was no need to do it, since everything revolved around the same structure. Now that parallel code can have a bigger impact in more areas, it is bound to influence performance, like what BeOS did to Linux and Windows ...

We are seeing benefits today, and with Mantle, Vulkan, and DX12 (and Metal) we're seeing a shift; this is not something that is futile.

Most game code is serial. You simply aren't going to see performance budge by trying to make serial code parallel, and more likely the reverse.

Async compute will increase GPU side performance via keeping the GPU at full load easier in certain workloads. But CPU side, there really isn't much that can be done.
 
Most game code is serial. You simply aren't going to see performance budge by trying to make serial code parallel, and more likely the reverse.

Async compute will increase GPU side performance via keeping the GPU at full load easier in certain workloads. But CPU side, there really isn't much that can be done.

And that is how it used to be. See how well Battlefield 4 runs on a PS4; it can only get better. If you still wanted to keep the serial aspect of it, BF4 simply would not have existed on consoles at all ...
 