Vega Rumors

Maybe not so simple economics.

If someone released new ETH miner software that did, say, 10x current rates and only asked for a 4% fee, you'd think he would make more than Claymore does now, right? However, he wouldn't make any more than Claymore does now, because network difficulty would increase, which would adjust reward rates down proportionally. He would probably make less, and would probably need to charge a 20% fee to break even with Claymore. The only advantage is keeping it to yourself, relatively speaking.
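A rough back-of-envelope sketch of that math (Python, with purely made-up hashrate and network figures, just to illustrate the difficulty adjustment): your expected cut of the block rewards is your hashrate divided by the network's total, so once everyone runs the faster miner your share lands right back where it started and only the fee changes.

# Illustrative only: hashrates and network size are hypothetical numbers,
# and the ~5 ETH / ~15 s block figures are just ballpark values for the era.
BLOCKS_PER_DAY = 86400 / 15          # ~15 s average block time
REWARD_PER_BLOCK = 5.0               # ETH

def daily_revenue(my_hashrate, network_hashrate, dev_fee):
    """Expected ETH/day: your share of total hashrate, minus the miner's fee."""
    share = my_hashrate / network_hashrate
    return BLOCKS_PER_DAY * REWARD_PER_BLOCK * share * (1 - dev_fee)

# Today: a 1 GH/s farm on a 100,000 GH/s network, 1% devfee
print(daily_revenue(1.0, 100_000.0, 0.01))      # ~0.285 ETH/day

# The 10x miner kept private: only you speed up, so ~10x the revenue
print(daily_revenue(10.0, 100_000.0, 0.04))     # ~2.76 ETH/day

# The same 10x miner released publicly: the whole network speeds up,
# difficulty tracks total hashrate, and your share is back where it began
print(daily_revenue(10.0, 1_000_000.0, 0.04))   # ~0.276 ETH/day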

Also, any software with increased rates would get reverse engineered and copied, which I think has happened between Claymore, ethminer and sgminer. They all have about the same rates, but Claymore's is easier to set up, especially for dual mining.

Also, you can cheat the Claymore devfee and have it sent to yourself, though he makes a lot to be sure.


Mining software is open source, man. Claymore isn't the only one that can do it; ccminer is damn close to Claymore now. Claymore was the first person to do a CUDA-based ETH miner that can dual mine, and that is why his has been so popular. Don't tell me there are programmers out there that are so much better than the overall community of programmers that work on ccminer and all its variants. Actually, the only reason I use Claymore is because it dual mines; if it didn't do that, or if I were just mining only ETH, I would use ccminer, because that percentage difference creates a profit change for me of about $5k a year.

If you look at the pools, there are quite a few guys with at least 200 rigs going. If that isn't a crazy amount for one person, what is? To keep 200 rigs up and maintain them, you gotta hire 1 or 2 people. Just to purchase that much hardware you need to be a multi-millionaire, and their time is more important elsewhere, like their day job. So let's say someone like that is going to hire a programmer to make "special" mining software; the money isn't going to last long. Just the infrastructure costs, cooling costs, and salaries for people to make sure things are going at top performance every second of the day will eat into what you are mining.

How can one release software that is 2x faster at mining than current software when the software is bound by the bandwidth of the graphics card? And I just showed you what the actual devs of ETH's blockchain just stated. So can you try to tell me how, with only the 480-ish GB/s that Vega will have, it can reach 100 MH/s? This isn't a software or driver issue; it's purely hardware.
 
What does that mean? Well, pretty much ya need a buttload of bandwidth and memory more than processing power. That is why frequency, more cores, etc. don't do much for ETH mining if the bandwidth isn't there.
Some of which may be provided by caching mechanisms. The acyclic graphs used by the DAG are trees, so it stands to reason the trunk could be cached or localized access patterns established over time. So even if access is nominally random, there could exist a temporal access pattern that a victim cache naturally discovers, if one exists. Haven't seen any details on where all that SRAM went yet. Vega still has more unaccounted cache than P100, and a big L3 that works transparently would make sense.

Vega won't be refreshed that quickly; why would it be? The Polaris refresh took one year, and did we see anything extra from that? 5% more performance at the cost of 30% more power?
That was with the same memory speeds. No reason significantly more capable HBM2 won't exist by then. That's also six months of driver improvements with a lot of new capabilities, and even games that have already announced support for packed math. No guarantee gaming Volta has that, because of segmentation.

But nV's architecture doesn't hit those limitations, only AMD's does... That is because they only have so many geometry units.
There isn't a fixed amount of geometry units, but pipeline elements, as the ALUs are doing the lifting. The 4SE arrangement seems more about binning triangles into specific pipelines. A triangle is a single thread in a wave, so AMD could push 4096/clock by the time the vertex shaders start up. They could go larger with some added hardware, but the 4SE part doesn't seem the concern. AMD was hiring new front-end engineers though, so maybe after Navi, unless it's a software issue. FPGA makes more sense there.

Look, first off you thought Vega was going to be a 1080 Ti killer because of all the specs AMD has been shouting out for close to a year now. Now that is not going to happen, and it's obvious. So you are going to tell me that what they just stated about primitive shaders is going to make any difference? If developers don't have that control over them, forget it; its polygon throughput will be no better than Polaris for almost all titles, past and near future, until developers have access to it. AMD will not be able to do it through drivers; as I stated, unless initiation and propagation of vertices are done with FP16, there will be no way for primitive shaders to be handled automatically via drivers.
Still think it'll take down Titan Xp once all the features are used. That part seems rather likely given possible performance gains from some abilities. Mantor explained how primitive shaders would make a difference, although it was limited to saving bandwidth. The culling mechanisms are well established, but with dynamic allocation they could speed it along with FP16. Not expecting huge gains until a dev really goes to town with it, but as I mentioned above, geometry isn't the biggest issue. The pixel shading is the bulk of the work, where making z-culling more efficient has big gains. No idea if DSBR had that part enabled yet, but the primitive shaders likely assist, converting positions into 8/16-bit to hopefully sort more efficiently. Lots of moving pieces that all need to be working, and they would prefer some tuning once they are.
 
Some of which may be provided by caching mechanisms. The acyclic graphs used by the DAG are trees, so it stands to reason the trunk could be cached or localized access patterns established over time. So even if access is nominally random, there could exist a temporal access pattern that a victim cache naturally discovers, if one exists. Haven't seen any details on where all that SRAM went yet. Vega still has more unaccounted cache than P100, and a big L3 that works transparently would make sense.

P100 gets a hash rate of 75 MH/s, ya know that, right? But why does it? How much bandwidth has it got? 780 GB/s, that is why! It's not the cache that did that, it's the bandwidth. DAG file sizes mean you can't keep much of it in cache.
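Quick sanity check on those numbers (a Python sketch, assuming the commonly cited Ethash figure of roughly 64 random 128-byte DAG reads per hash, i.e. about 8 KB of DRAM traffic per hash): peak memory bandwidth puts a hard ceiling on hash rate no matter the core clocks or shader count, and real cards land below it because random reads never hit peak bandwidth.

BYTES_PER_HASH = 64 * 128   # ~8 KB of DAG reads per Ethash hash (mix loop)

def bandwidth_ceiling_mhs(bandwidth_gb_s):
    """Upper bound on hash rate if every byte of bandwidth went to DAG reads."""
    return bandwidth_gb_s * 1e9 / BYTES_PER_HASH / 1e6

for name, bw in [("Vega, ~480 GB/s", 480), ("P100, ~780 GB/s", 780)]:
    print(f"{name}: <= {bandwidth_ceiling_mhs(bw):.0f} MH/s")

# Vega (~480 GB/s): ceiling around 59 MH/s, nowhere near a rumored 100 MH/s
# P100 (~780 GB/s): ceiling around 95 MH/s, consistent with the ~75 MH/s above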

That was with the same memory speeds. No reason significantly more capable HBM2 won't exist by then. That's also six months of driver improvements with a lot of new capabilities, and even games that have already announced support for packed math. No guarantee gaming Volta has that, because of segmentation.

Only if Vega is bandwidth bound; we'll know that when it comes out.

Volta probably won't have packed math, at least not at full speed, but nV probably can do more than cutting it down to 1/64 speed if need be. Unlike the DP units being cut, the FP16 units are the same as the FP32 units on Pascal; nV limits FP16 rates through drivers.

There isn't a fixed amount of geometry units, but pipeline elements, as the ALUs are doing the lifting. The 4SE arrangement seems more about binning triangles into specific pipelines. A triangle is a single thread in a wave, so AMD could push 4096/clock by the time the vertex shaders start up. They could go larger with some added hardware, but the 4SE part doesn't seem the concern. AMD was hiring new front-end engineers though, so maybe after Navi, unless it's a software issue. FPGA makes more sense there.

I doubt they would use FPGAs; FPGAs are good for extremely specific tasks, and if you want flexibility for programming, that is something they won't use. It's definitely a hardware issue; if it were software, it would have been solved or minimized to the maximum extent by now.

There is a fixed amount of geometry units; Vega has 4 of them, just like Polaris. Primitive shaders don't use the traditional geometry pipeline in that they use the compute units to handle that portion. You have a fixed number of shader units, which you mentioned, and that is your limit, but if they're used for geometry processing you have fewer left to do other work, so you are left with a balancing act later on down the pipeline. Either way you have a fixed amount of resources which can't be circumvented.


Still think it'll take down Titan Xp once all the features are used. That part seems rather likely given possible performance gains from some abilities. Mantor explained how primitive shaders would make a difference, although it was limited to saving bandwidth. The culling mechanisms are well established, but with dynamic allocation they could speed it along with FP16. Not expecting huge gains until a dev really goes to town with it, but as I mentioned above, geometry isn't the biggest issue. The pixel shading is the bulk of the work, where making z-culling more efficient has big gains. No idea if DSBR had that part enabled yet, but the primitive shaders likely assist, converting positions into 8/16-bit to hopefully sort more efficiently. Lots of moving pieces that all need to be working, and they would prefer some tuning once they are.

Just won't happen, man. It's like releasing a product with half its cores functioning and/or drivers crashing all over the place, pretty much an 8500 Pro launch; we saw how that card sold, and when they finally got their act together and the 8500 Pro could actually beat the GF3 Ti, the GF4 was out. Doesn't make sense to come out with something only half functional in performance when it's just not going to sell. Plus, if that theory is even remotely possible, the only card they would need to release would be Vega 56, which would be plenty if it could go up against the 1080 Ti, because at its power draw levels it would match Volta, or close to it.

If it were that capable of an architecture (we will know on Monday just how badly it's going to get crushed), they would have double-timed their driver development if it were on the software side.
 
Some of which may be provided by caching mechanisms. The acyclic graphs used by the DAG are trees, so it stands to reason the trunk could be cached or localized access patterns established over time. So even if access is nominally random, there could exist a temporal access pattern that a victim cache naturally discovers, if one exists. Haven't seen any details on where all that SRAM went yet. Vega still has more unaccounted cache than P100, and a big L3 that works transparently would make sense.


That was with the same memory speeds. No reason significantly more capable HBM2 won't exist by then. That's also six months of driver improvements with a lot of new capabilities, and even games that have already announced support for packed math. No guarantee gaming Volta has that, because of segmentation.


There isn't a fixed amount of geometry units, but pipeline elements, as the ALUs are doing the lifting. The 4SE arrangement seems more about binning triangles into specific pipelines. A triangle is a single thread in a wave, so AMD could push 4096/clock by the time the vertex shaders start up. They could go larger with some added hardware, but the 4SE part doesn't seem the concern. AMD was hiring new front-end engineers though, so maybe after Navi, unless it's a software issue. FPGA makes more sense there.


Still think it'll take down Titan Xp once all the features are used. That part seems rather likely given possible performance gains from some abilities. Mantor explained how primitive shaders would make a difference, although it was limited to saving bandwidth. The culling mechanisms are well established, but with dynamic allocation they could speed it along with FP16. Not expecting huge gains until a dev really goes to town with it, but as I mentioned above, geometry isn't the biggest issue. The pixel shading is the bulk of the work, where making z-culling more efficient has big gains. No idea if DSBR had that part enabled yet, but the primitive shaders likely assist, converting positions into 8/16-bit to hopefully sort more efficiently. Lots of moving pieces that all need to be working, and they would prefer some tuning once they are.
I just want you to know I love everything you write, and I plan on submitting your name for a Hugo award next year.
 
I didn't say it was better. I just said 3DMark isn't a game.

It's not a game, but that doesn't change the fact that it's fairly representative of performance. You can expect swings of 10% either way from that figure in actual applications. The TweakTown benchmarks were specifically games that look better on AMD hardware; I would expect the same from Vega, and those applications should be better on Vega than its counterparts. Come on, the RX 480 was 10% to 20% better in those games over the 1060 6GB. If Vega doesn't have that lead in those games, it will have a tough time keeping up with the 1070 in any other games.
 
If Vega produces a 1080Ti performance card @ $600, I would pre-order.

However, I am skeptical but hopeful.
Ahh, good for you Dorothy :)
Ya that would be so awesome! So awesome that that would be a killer first day buy YO!
Thanks for the heads-up? For a competitive card that sells @ WTF! are you even thinking here?? OMG!
I think I am missing your humor, but when I get it, it will be hilarious????
Uggggghhhhh, NO!
Get some perspective bro. I feel for you, but wow! :)
 
Vega won't be refreshed that quickly; why would it be? The Polaris refresh took one year, and did we see anything extra from that? 5% more performance at the cost of 30% more power?
Maybe Anarchist means Vega 20? It's supposedly an HPC card though, wonder what they'll do to it... Vega 10x2 is also coming out at the end of the year, which is quite soon for an mGPU card, out of character for either corporation. So I wonder if we will see a PLX, or an IF 500GB/sec link... usually they're almost a year down the track; for some reason they have stepped this up.

Another thing to note is AMD has been keeping their cards close to their chest about Navi and future products. They also have leapfrogging design teams now; perhaps they have something planned for 2018 that we don't know about... this mGPU timing has me really scratching my head. But I'm prepared for a letdown lol.

That said, the few new games I do play that benefit from mGPU support it, so 10x2 could be a nice hold-me-over, especially considering the drivers for single GPU should be quite good by then.
 
Probably right on the money there. Wondering if Navi could even do it before Nvidia replaces Volta; hope and pray they have got Navi to at least 4K60-70/Volta level and it can scale like TR, then we might even see 4K120 a little quicker. Either way I want to see the competition. AMD might just have another Ryzen on their hands there; they already have 500GB/sec IF links in Vega.



Lol, fair point, and yeah, the 1080 in retrospect was/is a pretty good buy. I would've personally avoided the Frontier Edition tax, but I always prefer reference cards where possible...
Need to consider, though, that Navi is going to have some AI/DL cores, and that is going to be rather complex from an R&D perspective, especially when one also considers how to access said functionality not just from the driver/GPU side but also libraries/SDKs.
Now consider how long it has taken to get Vega; Navi is going to be a lot tougher to do, as Nvidia has had a lot longer and more engineers committed to AI/DL, and it integrates well into their current architecture in terms of those 'separate' cores.

Edit:
You will see Vega20 (more designed towards FP64) way before Navi IMO.
Cheers
 
Well, I'm waiting for Kyle's review. Then I'll decide. You can always wait for the next best thing, but then you'll have nothing. But with new tech, you should always wait a couple of gens until all the standards settle and work nicely.
Not being flippant, but you just gave the perfect reason to get a GTX 1080 rather than Vega *shrug*.
Putting that aside, people have waited 15 months relative to the GTX 1080, so waiting seems to actually be relevant (in my context anyway, which to reiterate was only about making decisions based on the value comment raised by others).
Cheers
 
Not being flippant, but you just gave the perfect reason to get a GTX 1080 rather than Vega *shrug*.
Putting that aside, people have waited 15 months relative to the GTX 1080, so waiting seems to actually be relevant (in my context anyway, which to reiterate was only about making decisions based on the value comment raised by others).
Cheers
No, I didn't. There are plenty of people who wait like me because their card is good enough, or who are building their first or second system. If everybody had a card, then all card sales would be 0. We don't know sh*t until the performance reviews come in. I'm in neutral territory.

We still don't know where the 1080 sits in the price/performance ranks. If you want an adaptive monitor then it tilts towards AMD's favor even more IF the original MSRP holds and there is supply.
 
So in Canada, I'm assuming this is the watercooled V64...

Why ON EARTH would I want one of these vs. a 1080 Ti? They are the same price.


GV-RXVEGA64X W-8GD-B
Gigabyte VCX GV-RXVEGA64X W-8GD-B Radeon RX VEGA 64 8GB HBM2 2048Bit HDMI 3xDP (AS2222204718)
Usually available in 5-10 days. Back Order
$1,035.08
BUY

Yeah, NCIX GPU pricing is terrible. They even sell the MSI Armor 1080 for $999 CAD. I'm holding out hope that I could snag a card from Newegg for MSRP tomorrow (or tonight? who knows!)
http://www.ncix.com/detail/msi-geforce-gtx-1080-armor-e1-132869.htm
 
Maybe Anarchist means Vega 20? It's supposedly an HPC card though, wonder what they'll do to it... Vega 10x2 is also coming out at the end of the year, which is quite soon for an mGPU card, out of character for either corporation. So I wonder if we will see a PLX, or an IF 500GB/sec link... usually they're almost a year down the track; for some reason they have stepped this up.

Another thing to note is AMD has been keeping their cards close to their chest about Navi and future products. They also have leapfrogging design teams now; perhaps they have something planned for 2018 that we don't know about... this mGPU timing has me really scratching my head. But I'm prepared for a letdown lol.

That said, the few new games I do play that benefit from mGPU support it, so 10x2 could be a nice hold-me-over, especially considering the drivers for single GPU should be quite good by then.


Vega20 isn't a refresh, and it won't be for gaming cards either.

About leapfrogging design teams: they just hired some new folks, so they won't be leapfrogging anyone by 2018. The learning curve alone will slow down the team a year, and they will already not be working on Navi. So even the gen after Navi might not be the thing they are working on ;). It takes 3 years for a rehash of current architecture, 4 to 5 years for a new architecture. That puts it 2020-2021.


This is why AMD couldn't shift their GPU release schedules after Maxwell came out; it was already too late to do anything. Whatever they were working on is what we are getting now. Navi might be in the same boat in this regard; we won't know till we get more info, of course, but the timing just doesn't fit with the possibility of a major uplift, at least to capabilities that match Volta. AMD started the 2nd GPU design team, guessing, around 6 months to a year ago, and that is why 2020 looks likely. And Navi would have already been too far underway to get new people on or make any major changes.

Granted, Navi will be much different than GCN, but how much more competitive it will be depends on what nV is doing at the same time. AMD can't be looking at a 50% uplift in performance at the same wattage. They need to be looking at a 100% uplift at their current wattage, or a 40% uplift in performance with a 40% drop in power consumption over their current products. That is not easy to do. It's a monumental task, something we have never seen done before in the history of GPUs gen to gen.

And straight from AMD, Polaris was the largest perf/watt gain they have EVER gotten from gen to gen, and that wasn't impressive when we saw what Maxwell to Pascal did.
 
Maybe Anarchist means Vega 20? It's supposedly an HPC card though, wonder what they'll do to it... Vega 10x2 is also coming out at the end of the year, which is quite soon for an mGPU card, out of character for either corporation. So I wonder if we will see a PLX, or an IF 500GB/sec link... usually they're almost a year down the track; for some reason they have stepped this up.

.

The dual Vega card is made by Asus, not directly by AMD, so I wouldn't expect any fancy tech, just a PLX controller; it will probably be two highly binned, moderately clocked GPUs on a single PCB with AIO cooling.
 
The dual Vega card is made by Asus, not directly by AMD, so I wouldn't expect any fancy tech, just a PLX controller; it will probably be two highly binned, moderately clocked GPUs on a single PCB with AIO cooling.


Let's see if they actually make it, though... They might make a prototype, but actually selling the damn thing in quantity? Don't see that happening right now.
 
 
Ahh, good for you Dorothy :)
Ya that would be so awesome! So awesome that that would be a killer first day buy YO!
Thanks for the heads-up? For a competitive card that sells @ WTF! are you even thinking here?? OMG!
I think I am missing your humor, but when I get it, it will be hilarious????
Uggggghhhhh, NO!
Get some perspective bro. I feel for you, but wow! :)
Heads-up? Do you look at dates? Troll much?
 
I doubt they would use FPGAs; FPGAs are good for extremely specific tasks, and if you want flexibility for programming, that is something they won't use. It's definitely a hardware issue; if it were software, it would have been solved or minimized to the maximum extent by now.
Would be more flexible when not dealing with geometry, and able to accommodate a varying number of SEs, as multiple chips would each present as one.

There is a fixed amount of geometry units; Vega has 4 of them, just like Polaris. Primitive shaders don't use the traditional geometry pipeline in that they use the compute units to handle that portion.
They also have an instruction included in each scalar or vector ALU (need to double check) now capable of binning. That's an awful lot of geometry binning for 4 tri/clock. The geometry engines rely heavily on the interpolators in LDS anyways, so each CU could handle geometry. The bigger factor could be better bins generating higher coverage from those triangles.

Just won't happen, man. It's like releasing a product with half its cores functioning and/or drivers crashing all over the place, pretty much an 8500 Pro launch; we saw how that card sold, and when they finally got their act together and the 8500 Pro could actually beat the GF3 Ti, the GF4 was out. Doesn't make sense to come out with something only half functional in performance when it's just not going to sell. Plus, if that theory is even remotely possible, the only card they would need to release would be Vega 56, which would be plenty if it could go up against the 1080 Ti, because at its power draw levels it would match Volta, or close to it.
780ti seems an apt example. No bindless, no packed math, and possibly not a primitive shader mechanism if a dev goes crazy there.

The biggest difference may come if Vega ends up being a TBDR architecture like PowerVR. Often characterized by large caches and lower memory bandwidth as they're far more efficient. Vega has 45MB SRAM (even V100 is only 30ish, P100 half that) and seemingly low bandwidth. Also makes sense for low powered APUs and Apple who used to use PowerVR, but transitioned to AMD and maybe their own design.

Maybe Anarchist means Vega 20? It's supposedly an HPC card though, wonder what they'll do to it... Vega 10x2 is also coming out at the end of the year, which is quite soon for an mGPU card, out of character for either corporation. So I wonder if we will see a PLX, or an IF 500GB/sec link... usually they're almost a year down the track; for some reason they have stepped this up.
Vega20 is a dedicated compute thing. I'm talking 480 to 580 refresh. That was same memory with faster core, but Vega could be a different combination. AMD had a dual GPU on slides, but may be more of a density play for compute. No reason it wouldn't work for gaming to move volume though.

Need to consider, though, that Navi is going to have some AI/DL cores, and that is going to be rather complex from an R&D perspective, especially when one also considers how to access said functionality not just from the driver/GPU side but also libraries/SDKs.
Now consider how long it has taken to get Vega; Navi is going to be a lot tougher to do, as Nvidia has had a lot longer and more engineers committed to AI/DL, and it integrates well into their current architecture in terms of those 'separate' cores.
I doubt they will be separate, just some added adders for mixed precision, scheduling ability for wave ops if they don't come with SM6, and swizzle patterns for different matrix dimensions. Not sure if there are any more recent AI/DL instructions, but they aren't complicated. A few modifications to a SIMD and a tensor core.

As for API support, I'd think wave ops even in graphics could expose it. It's just that graphics doesn't have matrices nearly that large. In compute they'd be similar to AVX-512 instructions, which is probably the route AMD goes.

It takes 3 years for a rehash of current architecture, 4 to 5 years for a new architecture. That puts it 2020-2021.
With chiplets and the "matter of hours" to validate changes with Infinity it's possible if they focus on one specific system. I'd agree it's further off though.
 
The biggest difference may come if Vega ends up being a TBDR architecture like PowerVR. Often characterized by large caches and lower memory bandwidth as they're far more efficient. Vega has 45MB SRAM (even V100 is only 30ish, P100 half that) and seemingly low bandwidth. Also makes sense for low powered APUs and Apple who used to use PowerVR, but transitioned to AMD and maybe their own design.
A few things: Apple uses PowerVR in their mobile devices and still does. They only recently announced they were moving away from PowerVR, so I do not expect a custom Apple mobile GPU till 2019. Apple has switched back and forth between Radeon and NVIDIA GPUs in their desktops and laptops for a long time, and considering they have recently hired people to work on NVIDIA GPU implementation, it is possible in the next year or two they could be switching back to NVIDIA. So Apple never went from PowerVR to AMD.
 
Would be more flexible when not dealing with geometry, and able to accommodate a varying number of SEs, as multiple chips would each present as one.

Yeah, and when will this magical technology happen? Not in Vega's lifetime.


They also have an instruction included in each scalar or vector ALU (need to double check) now capable of binning. That's an awful lot of geometry binning for 4 tri/clock. The geometry engines rely heavily on the interpolators in LDS anyways, so each CU could handle geometry. The bigger factor could be better bins generating higher coverage from those triangles.

Depends; it doesn't work in a one-way fashion.


780ti seems an apt example. No bindless, no packed math, and possibly not a primitive shader mechanism if a dev goes crazy there.

Who gives a shit about the 7xx line anymore? It's 2-gen-old tech, soon to be 3 gens old...

The biggest difference may come if Vega ends up being a TBDR architecture like PowerVR. Often characterized by large caches and lower memory bandwidth as they're far more efficient. Vega has 45MB SRAM (even V100 is only 30ish, P100 half that) and seemingly low bandwidth. Also makes sense for low powered APUs and Apple who used to use PowerVR, but transitioned to AMD and maybe their own design.



Vega20 is a dedicated compute thing. I'm talking 480 to 580 refresh. That was same memory with faster core, but Vega could be a different combination. AMD had a dual GPU on slides, but may be more of a density play for compute. No reason it wouldn't work for gaming to move volume though.


I doubt they will be separate, just some added adders for mixed precision, scheduling ability for wave ops if they don't come with SM6, and swizzle patterns for different matrix dimensions. Not sure if there are any more recent AI/DL instructions, but they aren't complicated. A few modifications to a SIMD and a tensor core.

As for API support, I'd think wave ops even in graphics could expose it. It's just that graphics doesn't have matrices nearly that large. In compute they'd be similar to AVX-512 instructions, which is probably the route AMD goes.


With chiplets and the "matter of hours" to validate changes with Infinity it's possible if they focus on one specific system. I'd agree it's further off though.


All of this is what-if: chiplets, etc. Sorry, the tech hasn't been done for any of that; it's a magical day in the neighborhood when AMD is nowhere close to releasing things of that nature.

We went down this road before, and for Vega everything you talked about with multiple GPU dies on an MCM has not shown up yet. And it will not show up in consumer-grade products in any way or form for many years to come.

Guess what, Larrabee had rumors about scalability of cores and multiple dies too. That didn't pan out, did it?

Please stick with what is rooted in reality and the conversation can have some meaning; rumors that are actually rumors need some truth behind them, NOT what-if this or that. Can you already see your chiplet design failing with Vega? Why do you think Asus is making a dual-GPU design for Vega, not a chiplet design?

You think they will be able to get around the latency/bandwidth/cache issues with Navi to create a chiplet design? Everything that an MCM design with multiple GPUs will need, a chiplet design will ALSO need, to varying degrees, but the needs will still be there. So accomplishing one means accomplishing the other; if one can't be accomplished, the other can't be done either.

You think Intel, a company that has multiple billions of dollars more than AMD, wouldn't be able to figure it out in the same amount of time that AMD, supposedly by your posts, can figure it out?
 
Regarding Intel and MCM designs: isn't that exactly what EMIB is?
https://www.intel.com/content/www/us/en/foundry/emib.html
https://www.intel.com/content/www/us/en/foundry/emib--an-interview-with-babak-sabi.html


I've seen a few rumours saying it will debut with Ice Lake for Xeons, and possibly for consumer too. It's already in use in some FPGAs they fab for Altera (I think that's the name).


http://i.imgur.com/HduLC4U.png
http://i.imgur.com/JNyhw3b.png


Yes and no. For CPUs this will work fine up to a certain point, similar to what we see in Ryzen, but then we saw the pitfalls it can have, especially on the server side with Epyc. It wasn't by happenstance that Intel and AMD came out with similar things around the same time (although AMD came out with it in products sooner, by a few months). That means the designs at both companies were there for years before. So why did it happen now? It happened now for both of them because physical limitations were removed and the cost/benefit ratio now makes sense for BOTH companies.

Now having said that;

They still have the latency issue to get around ;) For GPUs, that latency will kill any capability to have transparency across multiple dies.

How are they going to get around this? They just need more bandwidth and cache per die. To do that in a consumer GPU product, the same cost/benefit ratio has to be realized. Until that point, it will never happen.

Just using an MCM is costly enough to remove it from a consumer lineup. This is why nV has not gone with HBM or HBM2: the memory is expensive, the interposer is expensive, and the cost of manufacturing and setting up that pipeline is expensive. That expense must be reflected either in margins or in the cost of the product. It's not too bad on the CPU side, because the interconnects don't need anywhere near as much throughput as a GPU needs, and the memory is still off the MCM in regular old DIMMs.
 
Yes and no. For CPUs this will work fine up to a certain point, similar to what we see in Ryzen, but then we saw the pitfalls it can have, especially on the server side with Epyc. It wasn't by happenstance that Intel and AMD came out with similar things around the same time (although AMD came out with it in products sooner, by a few months). That means the designs at both companies were there for years before. So why did it happen now? It happened now for both of them because physical limitations were removed and the cost/benefit ratio now makes sense for BOTH companies.

Now having said that;

They still have the latency issue to get around ;) For GPUs, that latency will kill any capability to have transparency across multiple dies.

How are they going to get around this? They just need more bandwidth and cache per die. To do that in a consumer GPU product, the same cost/benefit ratio has to be realized. Until that point, it will never happen.

Just using an MCM is costly enough to remove it from a consumer lineup. This is why nV has not gone with HBM or HBM2: the memory is expensive, the interposer is expensive, and the cost of manufacturing and setting up that pipeline is expensive. That expense must be reflected either in margins or in the cost of the product. It's not too bad on the CPU side, because the interconnects don't need anywhere near as much throughput as a GPU needs, and the memory is still off the MCM in regular old DIMMs.


Thanks for the explanation. So the three biggest problems with MCM designs, from what I understand, are latency, bandwidth, and, for GPUs, transparency between the dies so they can act as one and not like an SLI or CF solution. Interesting. I find the concept of MCM designs fascinating; as Moore's Law is faltering, this is a way to get around that looming problem.
 
Well, tomorrow is the big day, the era of "wait for Vega" is over, we are on the brink of a new dawn; the bright sun of "wait for Navi" awaits.

Somewhere I got lost in dusk
Chasing down the dawn;
Hopelessly I entered night,
Hoping for some holy light
Hiding behind curtains drawn -
Sunlight turned to dust!

Roaming through the silver dust
Settling down at dusk,
I felt my fascinations drawn
To thoughts of an alien dawn,
As stars glowed unholy light
In the deep abyss of night...

In the ever-reach of night,
Darkness, fear and dust
Drowned out the last reach of light
That still lingered on from dusk,
But, no! From my lobe of dawn
Hope could still be drawn!

Inspiration could still be drawn,
Even in darkest night,
As long as I still believe in dawn
And dancing motes of morning dust,
For, even in the ides of dusk
Shines some glinting light!

And oh! Such glorious light
In my eyes are drawn:
In the twilight hours of dusk -
In the moonlit gaze of night -
In the sparkle of the dust
Singing joy to dawn!

Yes, I still believe that dawn
Will greet me with light,
Even though I swallowed dust
With every breath I have drawn,
I can dream an end to night
As there's an end to dusk!

And yes, dusk and dust have drawn
From night a light - forward unto dawn!

https://allpoetry.com/poem/11952933-Forward-Unto-Dawn-by-Rene-Alexander

This is not the greatest poem ever written, but Vega isn't really shaping up to be the greatest GPU ever developed either.

Seriously though, I'm interested to see how Vega 56 performs and if it is actually available at MSRP in the next few months. Vega 64 looks DOA to me, but I'm willing to be surprised.
 
Video-based video card reviews suck anyway.

Uh... not exactly. If I want numbers and want them quick, I'll watch a short video showing various games with min/max/avg figures while being compared against other cards. BUT, if I want an in-depth technical review, I'll read PCPer and AnandTech.
 
An LTT review video is light on detail but gives a decent summary. When PCPer does their review, I get irritated by the details, but personally, I prefer reading it to hearing someone talk about it. Gamers Nexus usually includes a video and a longer written article, which I think works best.
 