Vega Rumors

Discussion in 'AMD Flavor' started by grtitan, May 10, 2017.

  1. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005
    The problem with that is people like Claymore take 2% of your mining profits. Anyone who could make mining software run that much faster would make a KILLING taking a cut from pools: the biggest pools have close to 20,000 workers. You know what that means; 2% of those 20,000 workers at 6 cards per worker is how much money?
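
    To put a rough number on that question, here is a back-of-envelope sketch in Python. The per-card hashrate, network hashrate and daily issuance below are assumed, hypothetical figures (only the worker and card counts come from the post), so treat the output as an order-of-magnitude illustration only.

        workers = 20_000          # workers on a large pool (figure from the post above)
        cards_per_worker = 6      # also from the post
        mhs_per_card = 28         # assumed MH/s per card (RX 470/480-class GPU)
        network_ghs = 80_000      # assumed total ETH network hashrate in GH/s (mid-2017 ballpark)
        eth_per_day = 25_000      # assumed daily ETH issuance (~5 ETH per block)
        dev_fee = 0.02            # Claymore-style 2% dev fee

        pool_ghs = workers * cards_per_worker * mhs_per_card / 1000
        pool_share = pool_ghs / network_ghs
        fee_eth_per_day = dev_fee * pool_share * eth_per_day
        print(f"pool ~{pool_ghs:,.0f} GH/s, ~{pool_share:.1%} of network, "
              f"dev fee ~{fee_eth_per_day:.0f} ETH/day")

    Even with these deliberately modest assumptions that is on the order of 20 ETH a day flowing to the fee address, which is the point: a 2% cut of a large pool's traffic dwarfs whatever one private farm could pay a hired programmer.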

    Yeah, shopping around for private miners, they will pay you a flat rate plus maybe a small % if you are that good. Guess what: when we are talking about hundreds of thousands of graphics cards being used in pools, that 2% will blow away anything one guy can give you.

    This is why solo mining is not recommended: the luck will get you all the time. You need a butt load of rigs to smooth out that variance for yourself, and even then it still hurts when you have a few hours of bad luck. It can go on for a day or two as well, not just hours.
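
    The "luck" here is just Poisson variance. A small sketch with assumed numbers (the rig size, network hashrate and blocks per day are all hypothetical) shows why a single rig mining solo sees nothing most days, even though its long-run expected income matches what a pool would pay:

        import math

        blocks_per_day = 5_000      # assumed network-wide ETH blocks per day
        my_mhs = 180                # assumed 6-card rig
        network_mhs = 80_000_000    # assumed total network hashrate (80 TH/s)

        lam = blocks_per_day * my_mhs / network_mhs    # expected solo blocks per day
        p_zero_today = math.exp(-lam)                  # Poisson chance of finding nothing today
        print(f"expected ~{lam:.3f} blocks/day, i.e. one block every ~{1/lam:.0f} days")
        print(f"chance a whole day passes with nothing: {p_zero_today:.1%}")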

    Simple economics.

    I know enough about programming to know what can and can't reasonably be done. Whoever pulled that 100 MH/s number pulled it out of their ass; it's just not possible to do that much with one card right now.

    And if you don't believe me about Eth being bandwidth bound, just look up Dagger Hashimoto; that is the algorithm Eth's blockchain is based on.

    They don't use multiple algorithms per coin, it's one algorithm and that is it (some coins are combinations of previous algorithms, but the mining software must use all of those algorithms as specified by the coin, otherwise it can't mine that coin).

    Here is the link about the algo for Eth.

    These are the guys that made the damn thing; they really should know what they are talking about, right?

    https://github.com/ethereum/wiki/wiki/Mining

    What does that mean? Well, pretty much you need a butt load of bandwidth and memory more than processing power. That is why frequency, more cores, etc. don't do much for Eth mining if the bandwidth isn't there.
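
    For anyone curious what "bandwidth bound" looks like in practice, here is a heavily simplified sketch of the Dagger Hashimoto / Ethash access pattern. This is not the real implementation (the real FNV mixing and keccak steps are omitted and sha3_256 is only a stand-in); just the access constants, 64 lookups of a 128-byte DAG page per hash, match the published spec.

        import hashlib, random

        ACCESSES = 64       # DAG lookups per hash (per the Ethash spec)
        PAGE_BYTES = 128    # bytes fetched per lookup

        def toy_hashimoto(header_nonce: bytes, dag: bytes) -> bytes:
            """Stand-in for hashimoto(): pseudo-random reads across a huge dataset."""
            pages = len(dag) // PAGE_BYTES
            mix = hashlib.sha3_256(header_nonce).digest()
            for _ in range(ACCESSES):
                # the page index depends on the evolving mix, so every lookup lands
                # somewhere unpredictable in a multi-GB DAG and caches barely help
                idx = int.from_bytes(mix[:4], "little") % pages
                page = dag[idx * PAGE_BYTES:(idx + 1) * PAGE_BYTES]
                mix = hashlib.sha3_256(mix + page).digest()  # stand-in for the real mixing
            return mix

        # tiny fake DAG just so the sketch runs; the real one was ~2 GB in 2017
        dag = bytes(random.getrandbits(8) for _ in range(PAGE_BYTES * 1024))
        print(toy_hashimoto(b"block-header-and-nonce", dag).hex())

    Each hash touches ACCESSES * PAGE_BYTES = 8 KiB of the DAG, so no matter how many shader cores you throw at it, the hashrate tops out at roughly memory bandwidth divided by 8 KiB.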
     
    Last edited: Aug 12, 2017
  2. Anarchist4000

    Anarchist4000 [H]ard|Gawd

    Messages:
    1,554
    Joined:
    Jun 10, 2001
    At that time you're also looking at a refresh of Vega. With the typical 3-5 year upgrade cycle for most of the market, 6 months is nothing.
     
  3. Anarchist4000

    Anarchist4000 [H]ard|Gawd

    Messages:
    1,554
    Joined:
    Jun 10, 2001
    You act like 4 tri/cycle is actually a problem. That's 4/cycle before you start tessellation, and it's based on binning geometry into four quadrants in screen space. For almost all titles, excluding CAD, that is more than sufficient. They will offer tangible gains in current games, as Mantor explained, just not in peak geometry rate without explicit programming. That wasn't a possibility before, and it's something devs have requested.
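
    For readers unfamiliar with the term, "binning geometry into screen-space quadrants" just means sorting each triangle into the region(s) of the screen it touches so each shader engine only processes its own quadrant. A purely conceptual Python sketch (not AMD's actual hardware algorithm; the bin layout and duplication rule are assumptions for illustration):

        from typing import List, Tuple

        Tri = Tuple[Tuple[float, float], Tuple[float, float], Tuple[float, float]]

        def bin_triangles(tris: List[Tri], width: int, height: int) -> List[List[Tri]]:
            """Assign each triangle to the screen-space quadrant(s) its bounding box covers."""
            bins: List[List[Tri]] = [[], [], [], []]   # TL, TR, BL, BR
            for tri in tris:
                xs = [p[0] for p in tri]
                ys = [p[1] for p in tri]
                cols = {int(x >= width / 2) for x in (min(xs), max(xs))}
                rows = {int(y >= height / 2) for y in (min(ys), max(ys))}
                for r in rows:
                    for c in cols:
                        bins[r * 2 + c].append(tri)    # duplicated if it straddles a boundary
            return bins

        tris = [((10, 10), (50, 20), (30, 60)), ((1000, 600), (1200, 620), (1100, 700))]
        print([len(b) for b in bin_triangles(tris, 1920, 1080)])   # -> [1, 0, 0, 1]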

    In the case of VR and other upcoming titles with TBDR, the issue is even smaller, as the scene gets reprojected. It's the best way to efficiently push 240 Hz, and even non-VR titles will likely adopt similar tech: techniques like async spacewarp, with improvements so devs can do more with less.

    No, it hasn't been released yet. You get an advance copy? All I've seen was the ISA paper.

    Beyond the 40ish new instructions that were exposed? That's not necessarily all of them either, just what's released.

    Primitive shaders get rid of the black box and allow a programmable pipeline. Beyond what used to be driver optimizations, they would be limited by the old pipeline structure without game specific testing. A lot is possible without those limitations and that's what devs have been presenting papers on.

    It doesn't work because it just works? The drivers have always done that work, only difference being they are likely culling a bit more efficiently. The attribute fetching was the big one Mantor mentioned that should work with existing games. Would likely entail their intelligent workgroup distribution as well.

    Eventually primitive shaders will have dynamic memory and scheduling capabilities. They seem to exist to control AMDs binning process in addition to optimizing the culling. Passing hints from prior frames to better predict distributions.
     
  4. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005

    Vega won't be refreshed that quickly. Why would it be? The Polaris refresh took a year, and did we see anything extra from that? 5% more performance at a cost of 30% more power?
     
    Armenius, Presbytier and Ocellaris like this.
  5. godihatework

    godihatework [H]Lite

    Messages:
    88
    Joined:
    Dec 18, 2002
    Moving goalposts.gif

    I mean, Jesus Christ dude.
     
    Reality and AlexFromAU like this.
  6. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005
    It's good for yesterday's games, not tomorrow's games. It effectively doubles the possible triangle counts, but the thing is we already see Fiji hitting today's games' polygon limits; look at the games where Polaris creeps up to Fiji's frame rates.

    Yeah, not much difference, and if you look at the ISA paper, you can see the pipeline is essentially the same.

    Almost all of them are for FP16, hence why I stated there's not much difference other than that in the shader array.

    But nV's architecture doesn't hit those limitations, only AMD's does... That is because they only have so many geometry units.

    Look, first off you thought Vega was going to be a 1080 Ti killer because of all the specs AMD has been shouting out for close to a year now. That is not going to happen, and it's obvious. So you are going to tell me that what they just stated about primitive shaders is going to make a difference? If developers don't have control over them, forget it; its polygon throughput will be no better than Polaris for almost all titles, past and near future, until developers have access to it. AMD will not be able to do it through drivers; as I stated, unless initiation and propagation of vertices are done with FP16, there will be no way for primitive shaders to be applied automatically via drivers.

    I do agree it should on that point, but it also depends on how it's implemented on AMD hardware; we don't know that yet.
     
    Armenius likes this.
  7. cpuspeed

    cpuspeed [H]Lite

    Messages:
    96
    Joined:
    Oct 7, 2008
    Maybe not so simple economics.

    If someone released new ETH miner software that did, say, 10x current rates and only asked for a 4% fee, you'd think he would make more than Claymore does now, right? But he wouldn't make any more than Claymore does now, because difficulty would increase, which would adjust the reward rate down proportionally. He would probably make less; he'd probably need to charge a 20% fee to break even with Claymore. The only real advantage is to keep it to yourself, relatively speaking.
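
    The difficulty-adjustment argument is easy to check with a toy model. All numbers below are made up for illustration: daily ETH issuance is roughly fixed, so a miner's income is just their share of network hashrate, and a speedup that everyone adopts cancels itself out.

        def daily_eth(my_mhs: float, network_mhs: float, issuance_eth: float = 25_000) -> float:
            """Expected ETH/day = your share of network hashrate times daily issuance."""
            return issuance_eth * my_mhs / network_mhs

        before       = daily_eth(my_mhs=180,   network_mhs=80_000_000)    # a 6-card rig today
        everyone_10x = daily_eth(my_mhs=1_800, network_mhs=800_000_000)   # 10x miner, 10x difficulty
        only_me_10x  = daily_eth(my_mhs=1_800, network_mhs=80_001_620)    # speedup kept private

        print(before, everyone_10x, only_me_10x)   # the first two are identical

    Only the private case pays off, which is exactly the "keep it to yourself" point above.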

    Also, any software with increased rates would get reverse engineered and copied, which I think has already happened between Claymore, ethminer and sgminer. They all have about the same rates, but Claymore's is easier to set up, especially for dual mining.

    Also, you can cheat the Claymore devfee and have it sent to yourself, though he makes a lot to be sure.
     
  8. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005

    Mining software is open source, man; Claymore isn't the only one that can do it, and ccminer is damn close to Claymore now. Claymore was the first person to do a CUDA-based ETH miner that can dual mine, and that is why his has been so popular. Don't tell me there are programmers out there so much better than the overall community of programmers that work on ccminer and all its variants. Actually, the only reason I use Claymore is because it dual mines; if it didn't do that, or if I was mining only Eth, I would use ccminer, because that % difference translates to a profit change of about $5k a year for me.

    If you look at the pools, there are quite a few guys with at least 200 rigs going. If that isn't a crazy amount for one person, what is? To keep 200 rigs up and maintained, you've got to hire 1 or 2 people. Just to purchase that much hardware you need to be a multi-millionaire, and their time is more important elsewhere, like their day job. So let's say someone like that hires a programmer to make "special" mining software; the money isn't going to last long. The infrastructure costs, cooling costs, and salaries for people keeping things at top performance every second of the day will eat into what you are mining.

    How can anyone release software that is 2x faster at mining than current software when the software is bound by the bandwidth of the graphics card? And I just showed you what the actual devs of Eth's blockchain stated. So can you try to tell me how, with only the 480ish GB/s that Vega will have, it can reach 100 MH/s? This isn't a software or driver issue, it's purely hardware.
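
    A quick sanity check of that claim, using the ~8 KiB of DAG traffic per hash noted above; the bandwidth figures are approximate and the 100 MH/s number is the rumor being questioned:

        BYTES_PER_HASH = 64 * 128   # 64 DAG accesses x 128-byte pages = 8192 bytes per hash

        def max_mhs(bandwidth_gbs: float) -> float:
            """Upper bound on Ethash hashrate if memory bandwidth is the only limit."""
            return bandwidth_gbs * 1e9 / BYTES_PER_HASH / 1e6

        print(f"~484 GB/s (Vega-class HBM2): {max_mhs(484):.0f} MH/s ceiling")    # ~59 MH/s
        print(f"100 MH/s would need >= {100e6 * BYTES_PER_HASH / 1e9:.0f} GB/s")  # ~819 GB/s

    So even a perfect miner on ~484 GB/s tops out just under 60 MH/s, which is why the 100 MH/s rumor doesn't pass the smell test.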
     
    Last edited: Aug 12, 2017
  9. cpuspeed

    cpuspeed [H]Lite

    Messages:
    96
    Joined:
    Oct 7, 2008
  10. Anarchist4000

    Anarchist4000 [H]ard|Gawd

    Messages:
    1,554
    Joined:
    Jun 10, 2001
    Some of which may be provided by caching mechanisms. The acyclic graphs used by the DAG are trees. So it stands to reason the trunk could be cached or localized access patterns established over time. So even if random, there could exist a temporal access pattern a victim cache naturally discovers if one exists. Haven't seen any details on where all that SRAM went yet. Vega still has more unaccounted cache than P100 and a big L3 that works transparently would make sense.

    That was with the same memory speeds. There's no reason significantly more capable HBM2 won't exist at that time. That's also 6 months of driver improvements with a lot of new capabilities, and even games that have already announced support for packed math. No guarantee gaming Volta has that, because of segmentation.

    There isn't a fixed amount of geometry units, but pipeline elements, as the ALUs are doing the lifting. The 4SE arrangement seems more about binning triangles into specific pipelines. A triangle is a single thread in a wave, so AMD could push 4096/clock by the time the vertex shaders start up. They could go larger with some added hardware, but the 4SE part doesn't seem to be the concern. AMD was hiring new front-end engineers though, so maybe after Navi, unless it's a software issue. An FPGA makes more sense there.

    Still think it'll take down Titan Xp once all the features are used. That part seems rather likely given the possible performance gains from some abilities. Mantor explained how primitive shaders would make a difference, although it was limited to saving bandwidth. The culling mechanisms are well established, but with dynamic allocation they could speed it along with FP16. Not expecting huge gains until a dev really goes to town with it, but as I mentioned above, geometry isn't the biggest issue. Pixel shading is the bulk of the work, where making z-culling more efficient has big gains. No idea if DSBR has that part enabled yet, but the primitive shaders likely assist, converting positions into 8/16-bit to hopefully sort more efficiently. Lots of moving pieces that all need to be working, and they would prefer some tuning once they are.
     
  11. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005
    P100 gets a hash rate of 75 MH/s, ya know that, right? But why does it? How much bandwidth does it have? 780 GB/s, that is why! It's not the cache that did that, it's the bandwidth. DAG file sizes mean you can't keep much in the cache.

    Only if Vega is bandwidth bound; we will know that when it comes out.

    Volta probably won't have packed math, at least not at full speed, but nV can probably do better than cutting it down to 1/64 speed if need be. Unlike the DP units, which are cut down, the FP16 units are the same as the FP32 units on Pascal; nV slows FP16 throughput through drivers.

    I doubt they would use FPGAs. FPGAs are good for extremely specific tasks, and if you want flexibility for programming, that is something they won't use. It's definitely a hardware issue; if it were software it would have been solved or minimized to the maximum extent by now.

    There is a fixed number of geometry units; Vega has 4 of them, just like Polaris. Primitive shaders don't use the traditional geometry pipeline, in that they use the compute units to handle that portion. You have a fixed number of shader units, which you mentioned, and that is your limit, but if they're used for geometry processing you have fewer to do other work, so you are left with a balancing act later on down the pipeline. Either way you have a fixed amount of resources which can't be circumvented.


    Just won't happen, man. It's like releasing a product with half its cores functioning and/or drivers crashing all over the place, pretty much an 8500 Pro launch. We saw how that card sold, and by the time they finally got their act together and the 8500 Pro could actually beat the GF3 Ti, the GF4 was out. It doesn't make sense to come out with something only half functional in performance when it's just not going to sell. Plus, if that theory were even remotely possible, the only card they would need to release would be Vega 56, which would be plenty if it could go up against the 1080 Ti, because at its power draw levels it would match Volta, or close to it.

    If it were that capable an architecture (and we will know on Monday just how badly it's going to get crushed), they would have double-timed their driver development if the problem were on the software side.
     
    Last edited: Aug 12, 2017
    Armenius likes this.
  12. Presbytier

    Presbytier Gawd

    Messages:
    943
    Joined:
    Jun 21, 2016
    Sure, I'm a high refresh rate gamer myself, but I recognize I'm a niche; so are VR and 4K gaming.
     
  13. Presbytier

    Presbytier Gawd

    Messages:
    943
    Joined:
    Jun 21, 2016
    I just want you to know I love everything you write, and I plan on submitting your name for a Hugo award next year.
     
  14. Boil

    Boil Gawd

    Messages:
    829
    Joined:
    Sep 19, 2015
    Oh man, that right there's just plain funny...!
     
  15. Ocellaris

    Ocellaris Ginger Ale, an alcoholic's best friend.

    Messages:
    17,836
    Joined:
    Jan 1, 2008
    Don't do it, his speech would never end.
     
    Armenius and razor1 like this.
  16. NKD

    NKD [H]ardness Supreme

    Messages:
    5,784
    Joined:
    Aug 26, 2007
    I didn't say it was better. I just said 3d mark isn't a game.
     
  17. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005
    It's not a game, but that doesn't change the fact that it's fairly representative of performance. You can expect swings of 10% either way from that figure in actual applications. The Tweaktown benchmarks were specifically games that looked better on AMD hardware; I would expect the same from Vega, and those applications should be better on Vega than on its counterparts. Come on, the RX 480 was 10% to 20% better than the 1060 6GB in those games. If Vega doesn't have that lead in those games, it will have a tough time keeping up with the 1070 in any other games.
     
    Armenius and AlexFromAU like this.
  18. funkydmunky

    funkydmunky [H]ard|Gawd

    Messages:
    1,632
    Joined:
    Aug 28, 2008
    Ahh, good for you Dorothy :)
    Ya that would be so awesome! So awesome that that would be a killer first day buy YO!
    Thanks for the heads-up? For a competitive card that sells @ WTF! are you even thinking here?? OMG!
    I think I am missing your humor, but when I get it, it will be hilarious????
    Uggggghhhhh, NO!
    Get some perspective bro. I feel for you, but wow! :)
     
  19. N4CR

    N4CR [H]ard|Gawd

    Messages:
    1,804
    Joined:
    Oct 17, 2011
    Maybe Anarchist means Vega 20? It's supposedly an HPC card though; wonder what they'll do to it... Vega 10x2 is also coming out at the end of the year, which is quite soon for an mGPU card, out of character for either corporation. So I wonder if we will see a PLX, or an IF 500 GB/sec link... usually they're almost a year down the track; for some reason they have stepped this up.

    Another thing to note is AMD has been keeping their cards close to their chest about Navi and future products. They also have leapfrogging design teams now; perhaps they have something planned for 2018 that we don't know about. This mGPU timing has me really scratching my head. But I'm prepared for a letdown lol.

    That said, the few new games I play that benefit from mGPU do support it, so 10x2 could be a nice hold-me-over, especially considering the drivers for single GPU should be quite good by then.
     
  20. CSI_PC

    CSI_PC 2[H]4U

    Messages:
    2,243
    Joined:
    Apr 3, 2016
    Need to consider, though, that Navi is going to have some AI/DL cores, and that is going to be rather complex from an R&D perspective, especially when one also considers how to access said functionality, not just from the driver/GPU side but also through libraries/SDKs.
    Now consider how long it has taken to get Vega; Navi is going to be a lot tougher to do, as Nvidia has had a lot longer and more engineers committed to AI/DL, and it integrates well into their current architecture in terms of those 'separate' cores.

    Edit:
    You will see Vega20 (more designed towards FP64) way before Navi IMO.
    Cheers
     
    Last edited: Aug 13, 2017
  21. CSI_PC

    CSI_PC 2[H]4U

    Messages:
    2,243
    Joined:
    Apr 3, 2016
    Not being flippant, but you just gave the perfect reason to get a GTX 1080 rather than Vega *shrug*.
    Putting that aside, people have waited 15 months relative to the GTX 1080, so waiting does actually seem to be relevant (in my context anyway, which to reiterate was only about making decisions based on the value comment raised by others).
    Cheers
     
    Armenius and GoldenTiger like this.
  22. DigitalGriffin

    DigitalGriffin 2[H]4U

    Messages:
    2,949
    Joined:
    Oct 14, 2004
    No I didn't. There are plenty of people who wait like me because their card is good enough, or who are building their first or second system. If everybody already had a card then all card sales would be 0. We don't know sh*t until the performance reviews come in. I'm in neutral territory.

    We still don't know where the 1080 sits in the price/performance ranks. If you want an adaptive sync monitor then it tilts even more in AMD's favor, IF the original MSRP holds and there is supply.
     
    Last edited: Aug 13, 2017
  23. KazeoHin

    KazeoHin [H]ardness Supreme

    Messages:
    6,415
    Joined:
    Sep 7, 2011
    If AMD says their card will beat a 1080, then it will trade blows with a 1080.

    If AMD says their card will trade blows with a 1080, then it will certainly be outclassed by a 1080, and will trade blows with the 1070.
     
    Armenius, kalston, Reality and 4 others like this.
  24. Drewis

    Drewis Limp Gawd

    Messages:
    420
    Joined:
    Jul 25, 2006
  25. SeymourGore

    SeymourGore [H]ard|Gawd

    Messages:
    2,042
    Joined:
    Dec 12, 2008
    Yeah, NCIX GPU pricing is terrible. They even sell the MSI Armor 1080 for $999 CAD. I'm holding out hope that I can snag a card from Newegg for MSRP tomorrow (or tonight? who knows!)
    http://www.ncix.com/detail/msi-geforce-gtx-1080-armor-e1-132869.htm
     
    Armenius likes this.
  26. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005

    Vega20 isn't a refresh, and it won't be for gaming cards either.

    About leapfrogging design teams: they just hired some new folks, so they won't be leapfrogging anyone by 2018. The learning curve alone will slow the team down a year, and they won't be working on Navi anyway. So even the gen after Navi might not be the thing they are working on ;). It takes 3 years for a rehash of a current architecture, 4 to 5 years for a new architecture. That puts it at 2020-2021.


    This is why AMD couldn't shift their GPU release schedules after Maxwell came out; it was already too late to do anything. Whatever they were working on is what we are getting now. Navi might be in the same boat in this regard; we won't know till we get more info of course, but the timing just doesn't fit with the possibility of a major uplift, at least not to the point of matching Volta. AMD started the 2nd GPU design team maybe 6 months to a year ago, and that is why 2020 looks likely. Navi would have already been too far along to get new people onto it or make any major changes.

    Granted, Navi will be much different than GCN, but how much more competitive it will be depends on what nV is doing at the same time. AMD can't be looking at a 50% uplift in performance at the same wattage. They need to be looking at a 100% uplift at their current wattage, or a 40% uplift in performance with a 40% drop in power consumption over their current products. That is not easy to do. It's a monumental task, something we have never seen done gen to gen in the history of GPUs.

    And straight from AMD, Polaris was the largest perf/watt gain they have EVER gotten gen to gen, and that wasn't impressive when we saw what Maxwell to Pascal did.
     
    Last edited: Aug 13, 2017
    Armenius likes this.
  27. Araxie

    Araxie [H]ardness Supreme

    Messages:
    5,548
    Joined:
    Feb 11, 2013
    The dual Vega card is made by Asus, not directly by AMD, so I would rule out any fancy tech; expect a PLX controller, and it will probably just be two highly binned, moderately clocked GPUs on a single PCB with AIO cooling.
     
    razor1 likes this.
  28. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005

    Let's see if they actually make it though... They might make a prototype, but actually selling the damn thing in quantity? Don't see that happening right now.
     
  29. cageymaru

    cageymaru [H]ard|News

    Messages:
    15,737
    Joined:
    Apr 10, 2003
  30. doz

    doz [H]ardness Supreme

    Messages:
    4,914
    Joined:
    Jun 16, 2009
    Heads-up? Do you look at dates? Troll much?
     
  31. Anarchist4000

    Anarchist4000 [H]ard|Gawd

    Messages:
    1,554
    Joined:
    Jun 10, 2001
    Would be more flexible when not dealing with geometry and able to accommodate a varying number of SE's as multiple chips would each present as one.

    They also have an instruction included in each scalar or vector ALU (need to double check) now capable of binning. That's an awful lot of geometry binning for 4 tri/clock. The geometry engines rely heavily on the interpolators in LDS anyways, so each CU could handle geometry. The bigger factor could be better bins generating higher coverage from those triangles.

    780ti seems an apt example. No bindless, no packed math, and possibly not a primitive shader mechanism if a dev goes crazy there.

    The biggest difference may come if Vega ends up being a TBDR architecture like PowerVR. Often characterized by large caches and lower memory bandwidth as they're far more efficient. Vega has 45MB SRAM (even V100 is only 30ish, P100 half that) and seemingly low bandwidth. Also makes sense for low powered APUs and Apple who used to use PowerVR, but transitioned to AMD and maybe their own design.

    Vega20 is a dedicated compute thing. I'm talking 480 to 580 refresh. That was same memory with faster core, but Vega could be a different combination. AMD had a dual GPU on slides, but may be more of a density play for compute. No reason it wouldn't work for gaming to move volume though.

    I doubt they will be separate; just add some adders for mixed precision, scheduling ability for wave ops if they don't come with SM6, and swizzle patterns for different matrix dimensions. Not sure if there are any more recent AI/DL instructions, but they aren't complicated. A few modifications to a SIMD and you have a tensor core.

    As for API support, I'd think wave ops even in graphics could expose it. It's just that graphics doesn't have nearly that large of matrices. In compute they'd be similar to AVX512 instructions, which is probably the route AMD goes.

    With chiplets and the "matter of hours" to validate changes with Infinity it's possible if they focus on one specific system. I'd agree it's further off though.
     
  32. Dayman

    Dayman [H]Lite

    Messages:
    70
    Joined:
    Jul 12, 2017
  33. Presbytier

    Presbytier Gawd

    Messages:
    943
    Joined:
    Jun 21, 2016
    A few things: Apple uses PowerVR in their mobile devices and still does. They only recently announced they were moving away from PowerVR, so I do not expect a custom Apple mobile GPU till 2019. Apple has switched back and forth between Radeon and NVIDIA GPUs in their desktops and laptops for a long time, and considering they have recently hired people to work on NVIDIA GPU implementation, it is possible in the next year or two they could be switching back to NVIDIA. So Apple never went from PowerVR to AMD.
     
    razor1 likes this.
  34. ecmaster76

    ecmaster76 Gawd

    Messages:
    913
    Joined:
    Feb 6, 2007
    that glass half empty or half full? :D
     
    c3k, jologskyblues, tviceman and 4 others like this.
  35. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005
    Yeah, and when will this magical technology happen? Not in Vega's lifetime.


    Depends, doesn't work in a one way fashion.


    Who gives a shit about the 7xx line anymore? It's 2-gen-old tech, soon to be 3 gens old...


    All of this is what-if: chiplets, etc. Sorry, the tech hasn't been done for any of that; it would be a magical day in the neighborhood, and AMD is nowhere near releasing things of that nature.

    We went down this road before, and for Vega, everything you talked about with multiple GPU dies on an MCM has not shown up yet. And it will not show up on consumer-grade products in any way or form for many years to come.

    Guess what, Larrabee had rumors about scalability of cores and multiple dies too... That didn't pan out, did it?

    Please stick with what is rooted in reality and the conversation can have some meaning; rumors that are actually rumors need some truth behind them, NOT what-ifs about this or that. Can't you already see your chiplet design failing with Vega? Why do you think Asus is making a dual GPU design for Vega, not a chiplet design?

    You think they will be able to get around the latency/bandwidth/cache issues with Navi to create a chiplet design? Everything that an MCM design with multiple GPUs needs, a chiplet design will ALSO need to varying degrees; the needs will still be there. So to accomplish one, the other has to be accomplished; if one can't be done, the other can't be done either.

    You think Intel, a company that has multiple billions of dollars more than AMD, wouldn't be able to figure it out in the same timeframe that, according to your posts, AMD supposedly can?
     
    Last edited: Aug 13, 2017
  36. Dayman

    Dayman [H]Lite

    Messages:
    70
    Joined:
    Jul 12, 2017
  37. razor1

    razor1 [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 14, 2005

    Yes and no. For CPUs this will work fine up to a certain point, similar to what we see in Ryzen; then we saw the pitfalls it can have, especially on the server side with Epyc. It wasn't by happenstance that Intel and AMD came out with similar things around the same time (although AMD got it into products sooner by a few months). That means the designs at both companies were there for years before. So why did it happen now? It happened now for both of them because physical limitations were removed and the cost/benefit ratio now makes sense for BOTH companies.

    Now having said that;

    They still have the latency issue to get around ;) For GPUs, that latency will kill any capability to have transparency across multiple dies.

    How are they going to get around this? They just need more bandwidth and cache per die. To do that in a consumer GPU product, the same cost/benefit ratio has to be realized. Until that point, it will never happen.

    Just using an MCM is costly enough to push it out of a consumer lineup. This is why nV has not gone with HBM1 or HBM2: the memory is expensive, the interposer is expensive, and the cost of manufacturing and setting up that pipeline is expensive. That expense must be reflected either in margins or in the cost of the product. It's not too bad on the CPU side, because the interconnects don't need anywhere near the throughput a GPU needs, and the memory still sits off the MCM in regular old DIMMs.
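
    A rough back-of-envelope comparison of those interconnect demands, with assumed/approximate numbers (the EPYC link figure is a rough public ballpark, and the cross-die traffic split is a pure assumption for illustration):

        cpu_local_mem_gbs = 43     # ~2 channels of DDR4-2666 per Zeppelin die (approx.)
        cpu_die_link_gbs = 42      # rough EPYC die-to-die Infinity Fabric bandwidth (approx.)
        gpu_local_mem_gbs = 480    # Vega-class HBM2 bandwidth
        cross_die_fraction = 0.25  # assumed share of GPU memory traffic crossing dies

        needed_gpu_link_gbs = gpu_local_mem_gbs * cross_die_fraction
        print(f"CPU: die link is ~{cpu_die_link_gbs / cpu_local_mem_gbs:.0%} of local memory bandwidth")
        print(f"GPU: even {cross_die_fraction:.0%} cross-die traffic needs ~{needed_gpu_link_gbs:.0f} GB/s, "
              f"~{needed_gpu_link_gbs / cpu_die_link_gbs:.0f}x an EPYC-class link")

    The point is that a CPU MCM link only has to keep pace with a couple of DDR4 channels, while a GPU chiplet link would have to carry a large slice of HBM-class bandwidth, which is the cost/benefit argument above in numbers.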
     
    Last edited: Aug 13, 2017
  38. Dayman

    Dayman [H]Lite

    Messages:
    70
    Joined:
    Jul 12, 2017

    Thanks for the explanation. So the three biggest problems with MCM designs, from what I understand, are latency, bandwidth, and, for GPUs, transparency between the dies so they can act as one and not like an SLI or CF solution. Interesting. I find the concept of MCM designs fascinating; as Moore's Law is faltering, this is a way to get around that looming problem.
     
    razor1 likes this.
  39. Dayman

    Dayman [H]Lite

    Messages:
    70
    Joined:
    Jul 12, 2017
    I'm willing to bet Intel will never say that Moore's law is dying. :D
     
  40. Ieldra

    Ieldra I Promise to RTFM

    Messages:
    3,493
    Joined:
    Mar 28, 2016
    Well, tomorrow is the big day, the era of "wait for Vega" is over, we are on the brink of a new dawn; the bright sun of "wait for Navi" awaits.

    Somewhere I got lost in dusk
    Chasing down the dawn;
    Hopelessly I entered night,
    Hoping for some holy light
    Hiding behind curtains drawn -
    Sunlight turned to dust!

    Roaming through the silver dust
    Settling down at dusk,
    I felt my fascinations drawn
    To thoughts of an alien dawn,
    As stars glowed unholy light
    In the deep abyss of night...

    In the ever-reach of night,
    Darkness, fear and dust
    Drowned out the last reach of light
    That still lingered on from dusk,
    But, no! From my lobe of dawn
    Hope could still be drawn!

    Inspiration could still be drawn,
    Even in darkest night,
    As long as I still believe in dawn
    And dancing motes of morning dust,
    For, even in the ides of dusk
    Shines some glinting light!

    And oh! Such glorious light
    In my eyes are drawn:
    In the twilight hours of dusk -
    In the moonlit gaze of night -
    In the sparkle of the dust
    Singing joy to dawn!

    Yes, I still believe that dawn
    Will greet me with light,
    Even though I swallowed dust
    With every breath I have drawn,
    I can dream an end to night
    As there's an end to dusk!

    And yes, dusk and dust have drawn
    From night a light - forward unto dawn!

    https://allpoetry.com/poem/11952933-Forward-Unto-Dawn-by-Rene-Alexander

    This is not the greatest poem ever written, but Vega isn't really shaping up to be the greatest GPU ever developed either.

    Seriously though, I'm interested to see how Vega 56 performs and whether it is actually available at MSRP in the next few months. Vega 64 looks DoA to me, but I'm willing to be surprised.