AMD Vega and Zen Gaming System Rocks Doom At 4K Ultra Settings

I may have misheard Jen-Hsun Huang's presentation at CES. Now I would like to see how well PX2 does in New York City with hundreds if not thousands of pedestrians and yellow cab drivers plowing up the roads :LOL:. The samples shown in the videos with the self-driving cars were somewhat funny compared to real, complex driving. I would like to see what these cars can do when other cars run into them, cut right in front and brake, on icy roads, etc. Not to mention after a collision - will they drive off afterwards?

Heck yeah. I mentioned in the past that a real-world full solution would come at the earliest in late 2019, and that would be the first samples of a realistic end-to-end solution platform.
But I repeat: it was GTC Europe 2016 that first mentioned Volta 'Tegra' for Q4 2017, and at CES this year it became Q3/Q4 2017.
Here is the article from GTC Europe:
While the company is teasing it today, the SoC won’t begin sampling until Q4 of 2017, and that in turn implies that volume shipments won’t even be until 2018
http://www.anandtech.com/show/10714/nvidia-teases-xavier-a-highperformance-arm-soc
Volume shipments are not what matters, because even Drive PX2 is not in volume now and we already have most of the Pascal lineup.
The CES statement was more indirect.
Cheers
 
Doom just isn't enough. It's pretty much a given it would run well on any new card, and AMD does particularly well in Doom. We need pricing and benchmarks from a swathe of games and OpenCL apps to see what market(s) Vega caters to. I suspect it's going to match the 1080 in gaming overall, maybe slightly exceed it, while costing more to build and using more power. I'd like to be wrong about that.
 
You apparently haven't used an AMD card in a long time. I have had an RX 480 since about two months after launch and I have not had any driver-related issues. They are also putting out new drivers quite frequently and have been pretty responsive about fixing bugs.

Quite right. I don't get the AMD driver hate. As someone who recently switched from an AMD card to a GTX 1060, I can say firsthand that while Nvidia will at least deliver driver fixes in a pretty timely manner, their drivers are by no means better than what I experienced with AMD over the last few years.
 
If I had to guess, I would say that the RTG group is now focusing fully on servers too. Vega 20, for example, is Vega 10 on 7nm plus double precision: same 4096 SPs and GFX9.

Will Vega really be built on GF 7nm? Can they do it from the get-go on a big GPU?
 
It wasn't clear to me. Your sentence structure and wording are confusing at times. However, I'm betting the farm that CONSUMER Volta is 2018 (per Nvidia's own slides) and a 1080 Ti arrives sometime this year. 2017 will be a Pascal refresh party.
Sorry, I'm busy, have an engineering background, and work across multiple countries, and I have gone over this multiple times in multiple threads, so I'm losing enthusiasm.
Look, Tegra releases after some of the pro and consumer dGPUs; it has been this way for a long time.
Tegra X1 is based upon Maxwell and Drive PX2 upon Pascal; in both cases they released after certain pro and consumer models of the same architecture.
The Drive PX2 went into real sampling late last year, maybe Q3 onwards.
The official sampling release to manufacturers for the Volta Tegra (named Xavier) is Q4 2017 (going by indirect comments at CES this year, it could even be very late Q3).

Also, for 2017 Nvidia has contractual obligations for two massive supercomputers and one large supercomputer that must be started (meaning the V100 has to exist and be ready to ship early in 2H, in a similar way to the P100: core clients only, followed by a DGX-1 equivalent, and finally an individual card).
This is further reinforced by the fact that Nvidia and IBM recently finalised the performance spec they can implement for these projects, and it has increased from the original estimates; in line with this, at CES this year Nvidia also revised the performance of the Volta Tegra going into sampling Q3-Q4 upwards compared to the original estimates.

Also fitting with this, Nvidia took a launch strategy totally different from the norm, a very costly and technically risky approach that reversed the usual order of launch; it caught everyone out as they started with the 610mm2 die and then worked downwards.
This would normally be a no-no, especially when it involves a node shrink and a new (for Nvidia) complex memory-controller technology in HBM2.
Now ask yourself why they would take such a big risk, then consider that this is one generation before their big supercomputer contracts, and that Pascal was brought in as a stepping stone to a Volta that had been pushed back; this was not the original plan, and we may be seeing a similar strategy from AMD with Vega 20 and Navi.
One reason would be to reduce risk by spreading new technology across both Pascal and Volta, hence Pascal is part of the technical milestone towards Volta.
Anyway, it is standard engineering practice not to introduce all the new technology and manufacturing changes (such as a fab node shrink) at once; this assists risk management and limits the domino effect they can have on the development and manufacturing cycle.
So, due to the factual information that exists on both Xavier and the supercomputer contracts (the date for them was 2017), it was always going to be this year and not 10nm.
This also means we are highly likely to see one or two consumer Volta dGPUs Q3 or later this year, likely a 1080 replacement, as again Volta will be starting with a 600mm2 die for Tesla as the 'V100'.
Cheers
 
Sorry, I'm busy, have an engineering background, and work across multiple countries, and I have gone over this multiple times in multiple threads, so I'm losing enthusiasm.
Look, Tegra releases after some of the pro and consumer dGPUs; it has been this way for a long time.
Tegra X1 is based upon Maxwell and Drive PX2 upon Pascal; in both cases they released after certain pro and consumer models of the same architecture.
The Drive PX2 went into real sampling late last year, maybe Q3 onwards.
The official sampling release to manufacturers for the Volta Tegra (named Xavier) is Q4 2017 (going by indirect comments at CES this year, it could even be very late Q3).

Also, for 2017 Nvidia has contractual obligations for two massive supercomputers and one large supercomputer that must be started (meaning the V100 has to exist and be ready to ship early in 2H, in a similar way to the P100: core clients only, followed by a DGX-1 equivalent, and finally an individual card).
This is further reinforced by the fact that Nvidia and IBM recently finalised the performance spec they can implement for these projects, and it has increased from the original estimates; in line with this, at CES this year Nvidia also revised the performance of the Volta Tegra going into sampling Q3-Q4 upwards compared to the original estimates.

Also fitting with this, Nvidia took a launch strategy totally different from the norm, a very costly and technically risky approach that reversed the usual order of launch; it caught everyone out as they started with the 610mm2 die and then worked downwards.
This would normally be a no-no, especially when it involves a node shrink and a new (for Nvidia) complex memory-controller technology in HBM2.
Now ask yourself why they would take such a big risk, then consider that this is one generation before their big supercomputer contracts, and that Pascal was brought in as a stepping stone to a Volta that had been pushed back; this was not the original plan, and we may be seeing a similar strategy from AMD with Vega 20 and Navi.
One reason would be to reduce risk by spreading new technology across both Pascal and Volta, hence Pascal is part of the technical milestone towards Volta.
Anyway, it is standard engineering practice not to introduce all the new technology and manufacturing changes (such as a fab node shrink) at once; this assists risk management and limits the domino effect they can have on the development and manufacturing cycle.
Cheers

Thanks for your insight.
 
Thanks for your insight.
NP. The other big technical achievement/milestone I forgot to mention that they needed to introduce now was NVLink, in preparation for the Volta supercomputers, so that is a fair amount of technical risk for the very first product (the Pascal Tesla P100).
That is why I am curious whether Vega 20 is a technical stepping stone/risk-management strategy by AMD for Navi, as its timeline seems more aligned with it.
Cheers
 
It also favors them in DX12, not just Vulkan. You are just trying to come up with excuses because it beat the 1080 by about 20% to 25% at 4K (40 to 55 fps for the GTX 1080 OC vs 60 to 75 fps for Vega). That is not just because of the API they used.

Hmmm, your point of view is extremely interesting if we consider the fact that the first purely DX12-built title (all the others are just DX11 games rebranded to DX12), Gears of War 4, gives a clear victory to the GTX 1060 over its competitor, the RX 480:
http://www.hardocp.com/article/2016/11/23/gears_war_4_dx12_performance_review/6#.WG_0n3196Uk :whistle:

[P.S. Even Ashes of the Singularity, which Raja Koduri used as a flagship to advertise the RX 480 when it was first introduced, performs exactly the same on both competing GPUs (http://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review/7). (y)]
 
Really? I just gave you proof that it runs between 40 and 55 fps, most of the time below 50 fps, and that was a 1080 OC. Remember, we are talking 4K, not 1440p or 1080p.
You are obviously CPU limited; other users and test sites have managed much more than that at 4K with Nightmare settings.
 
That is not an opinion; it is based on facts from HPC programmers, which I have already linked you to.

Software, software, software: something AMD is lagging behind on, and the resources they need to fix that are not measured in quarters, they're measured in YEARS, as nV has been pumping money into their software solutions since the G80.

Just because a technology looks good and has usable features doesn't mean it's going to be used right off the bat. Adoption of new tech when software isn't available, or when the software works better on or is made for existing hardware (nV's), is a major problem that slows things to a crawl.

Evidently developers must not agree, else AMD's market share wouldn't have increased three quarters in a row. :)

Were you able to dig up anything Nvidia has as competition to the Radeon Pro SSG?
 
Fake news! The jury is out until the cards are in [H]'s hands. And I'd say the same for any Nvidia press release about their cards' performance. Move along, nothing to see here.
 
We all know the quality of ATI's drivers. They have a long, repeated history of delivering crap drivers. That's not going to change with Vega. I just hope it beats the 1080 so we get some real competition in both performance and price. Nvidia needs to be taken down a peg and forced to get competitive again.

AMD needs to be competitive again, I think you mean. Nvidia, competing only with themselves, went from the 900 series to the 1000 series (a very big jump in horsepower) without any pressure from AMD... but I agree with you 100%: people 'excited' for an AMD part should think long and hard about what ruined AMD's previous entries... really lazy driver support. Until that changes, their cards should be considered suspect. Nvidia can be faulted for things, but driver support shouldn't be one of them.
 
1) Your dual 290X setup sucks far more power and kicks out far more heat.
2) They did have 8GB.

I don't disagree about the power, and I was guessing about the 8GB.

My ultimate point was that I'm using 290X cards, which sit here in the generational line-up:

290X GCN 2nd gen
R9 GCN 3rd gen
470 Polaris
Vega

If Vega is only a bit better than cards two to three generations old, that is an unfriendly sign.

Also, I'm not sure if that was Vega 10 or Vega 11, but Vega 11 is supposed to be "High End" per the roadmaps I read, with Vega 10 as "Enthusiast".

If the Vega chip had rolled a rock-solid 120 fps then I would be pre-ordering it (once XFX makes me an 8GB DD version, natch).
 
This. I find it interesting that fanboys have to find a way to say it's not that significant. When it's beating a GTX1080 by that kind of margin, it IS significant, regardless of whether or not it's DOOM. If this is what AMD launches, we should all be happy that there's finally going to be some competition at the high end.
It is not fanboys but members correcting the fact that the wrong videos keep being used, where the 1080 is set to Nightmare for shadows while the AMD demo was Ultra for all settings.
Pendragon found a video showing that at around 11 minutes, on a similar map, the 1080 was doing 60 fps to 78-ish fps.
https://hardforum.com/threads/vega-countdown-has-appeared.1921304/page-12#post-1042738780

In a more enclosed indoor section it was fluctuating from around 53 fps (with some very rare drops to 49 fps) up to around 86 fps.

TBH Doom is a special case because AMD has their Shader Intrinsic Functions extension to work with in Vulkan, so it is difficult to tell just how much more performance is left from optimisation, as that is one feature AMD would already have working on Vega. I would expect a bit more to come, but it needs to beat the 1080 by a fair margin to reflect other games and to have parity with custom 1080 AIBs; that is the fundamental comparison, rather than the 1080 FE.
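As a side note on what that extension actually is at the API level: below is a minimal C sketch (my own illustration, not code from any shipping title; has_amd_intrinsics is a made-up helper name) showing how an engine could ask the Vulkan driver whether AMD's shader-intrinsics extensions are present, assuming it already holds a valid VkPhysicalDevice.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vulkan.h>

/* Returns 1 if the device exposes AMD's shader-intrinsics extensions.
   VK_AMD_gcn_shader and VK_AMD_shader_ballot are the real extension names
   behind the "Shader Intrinsic Functions" marketing term. */
static int has_amd_intrinsics(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, NULL);   /* query count */

    VkExtensionProperties *props = malloc(count * sizeof(*props));
    if (!props)
        return 0;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, props);  /* fill list */

    int found = 0;
    for (uint32_t i = 0; i < count; ++i) {
        if (strcmp(props[i].extensionName, "VK_AMD_gcn_shader") == 0 ||
            strcmp(props[i].extensionName, "VK_AMD_shader_ballot") == 0)
            found = 1;
    }
    free(props);
    return found;
}

A title would then request those extension names at device creation and ship shaders compiled to use the corresponding intrinsics; without that opt-in, the fast path never gets exercised, which is why Doom is a special case.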

Look at the 1060 vs 480 as an indicative reference baseline, since we know how that pans out in various games, albeit this test is at 1440p:
[benchmark chart: GTX 1060 vs RX 480 at 1440p]


In theory some developers could use the new alternative primitive shader, but I think for quite a while it will be rarely implemented, as it requires work by the developer rather than AMD and is only supported on Vega models, meaning a very small demographic.
Cheers
 
Really? I just gave you proof that it runs between 40 and 55 fps, most of the time below 50 fps, and that was a 1080 OC. Remember, we are talking 4K, not 1440p or 1080p.


I'm not going to argue with you because I know what I get, and in another thread I have shown three 1080 reviews that get 60 fps and over, one of them with Nightmare settings, and that Nightmare review was 67... Yes, at 4K.

And it all depends on what level is being played.
 
All of the demos so far point to the fact that Vega is a fair bit faster than Fury X. The card just needs to be released and it will be a hit.
 
Evidently developers must not agree, else AMD's market share wouldn't have increased three quarters in a row. :)

Were you able to dig up anything Nvidia has as competition to the Radeon Pro SSG?


It has nothing to do with that, and I've already shown you, in the other thread you crapped on, that the market-share swings had nothing to do with AMD's new products, so why even go there? Why can't you figure these things out for yourself? And your ability to stay blind to hard data is impressive.

Tell me the specific applications the Radeon Pro SSG will enable and then we can go from there. I can tell you right now, from the way your post history reads, you probably have no fuckin' clue, and I don't want to hear marketing spiel; I want your thoughts on how it's going to be implemented in a wide array of different applications. The aforementioned "HOW".

PS: nV has something similar, a bit more elegant than just a single SSD (elegant as in flexibility), but it does introduce the need to use their high-bandwidth transport bus tech. But when these cards are going for, what, 10k? I think that is a reasonable price looking at the tech; expect companies that buy these to look at all their options first.
 
It also favors them in DX12, not just Vulkan. You are just trying to come up with excuses because it beat the 1080 by about 20% to 25% at 4K (40 to 55 fps for the GTX 1080 OC vs 60 to 75 fps for Vega). That is not just because of the API they used.

The graph two posts above you shows the 1080 at 67 fps on average. What do you have to say about that?
 
While I'm vendor agnostic, I do realize I'm in AMD la-la land, but I have to say, I find GFE to be smooth and fantastically convenient.
lol, I am glad someone does, because it makes no sense to me for Nvidia to waste time on that if everyone hates it as much as I do. :)
  • I like to set my own game settings, not have some program think it knows better than me
  • I do not like the pop-up windows with stuff I am totally not interested in
  • It's the only way to check automatically for updates, rather than the driver being able to do that - LAME
  • It's the only way to capture the screen with Nvidia drivers, by having to use this POS
  • Telemetry, which is now a moot point since it is just put into the drivers - too bad Nvidia can't put other, more useful stuff there, like auto-checking for driver updates
  • Yet another overlay on top of the Steam overlay, which itself could be on top of yet another overlay like UPlay's - conflicts, in short
I am so glad that AMD got rid of Raptr and improved their drivers without interfering with how I like to set up my machine for gaming. I also do other stuff besides gaming and do not need yet another program in the background doing things I have zero interest in. That is just me, though; glad you find something useful about it. My 2 cents.
 
Well, if you're constantly playing games that are challenging for your system, then GFE might not be the best choice; you can go into the drivers and change the settings there, or do it in game.

However, most games I play aren't in that category, and GFE is an easy way to put everything on 'Ultra' for the most part; if a game isn't running fast enough, instead of going down the list of settings and researching what works and what doesn't for you, you just pull down the settings slider. I do this for games that are harder to drive and that I'd prefer to run faster (typically to get closer to the 165 FPS my monitor supports), such as BF1.

Basically, I've moved beyond choosing individual settings. I want it to look good and work, and GFE makes that happen effortlessly ;).

(I will say that it isn't perfect; I'd prefer more granularity in GFE, and possibly performance targets along the lines of an FPS target as well as setting biases.)
 
Well, if you're constantly playing games that are challenging for your system, then GFE might not be the best choice; you can go into the drivers and change the settings there, or do it in game.

However, most games I play aren't in that category, and GFE is an easy way to put everything on 'Ultra' for the most part; if a game isn't running fast enough, instead of going down the list of settings and researching what works and what doesn't for you, you just pull down the settings slider. I do this for games that are harder to drive and that I'd prefer to run faster (typically to get closer to the 165 FPS my monitor supports), such as BF1.

Basically, I've moved beyond choosing individual settings. I want it to look good and work, and GFE makes that happen effortlessly ;).

(I will say that it isn't perfect; I'd prefer more granularity in GFE, and possibly performance targets along the lines of an FPS target as well as setting biases.)
Gotcha. Well, to me part of gaming is figuring out and finding the best settings. I rarely use motion blur, especially if fps are high. FXAA = blur to me, so I turn it off. FOV will most likely be raised higher than anything GFE would ever select. I also hate depth of field, so for me GFE messes up my games if I use it. To each their own, though; I do think GFE is much better than Raptr ever was.
 
AMD doesn't need to beat a 1080 Ti as long as it's priced appropriately. If they have 1080 performance at 1070 $$$ then I'll buy. If it's the same performance for the same price I'll just stick with Nvidia.
The only logical thing said so far.
5% of you in this [H] forum may own or could afford a 1080. Yet you whine and puff out your chests demanding AMD beat Nvidia's best, but you couldn't afford it if they did, because it would be priced very similarly!
Buy the best your budget affords. AMD will be competitive in every logical price bracket ;)
 
It has nothing to do with that, and I've already shown you, in the other thread you crapped on, that the market-share swings had nothing to do with AMD's new products, so why even go there? Why can't you figure these things out for yourself? And your ability to stay blind to hard data is impressive.

Tell me the specific applications the Radeon Pro SSG will enable and then we can go from there. I can tell you right now, from the way your post history reads, you probably have no fuckin' clue, and I don't want to hear marketing spiel; I want your thoughts on how it's going to be implemented in a wide array of different applications. The aforementioned "HOW".

PS: nV has something similar, a bit more elegant than just a single SSD (elegant as in flexibility), but it does introduce the need to use their high-bandwidth transport bus tech. But when these cards are going for, what, 10k? I think that is a reasonable price looking at the tech; expect companies that buy these to look at all their options first.

Oh, interesting, thanks!
Anyway, I've had about enough of your marketing spiel. You can't even accept that AMD gained market share for the last three quarters in a row, even though it's right in front of your eyes, my God man. Of course they're going to continue to grow even quicker now that they have a very, very compelling platform with Naples. 54:46 ROCm SOCm!
 
It has nothing to do with that, and I've already shown you, in the other thread you crapped on, that the market-share swings had nothing to do with AMD's new products, so why even go there? Why can't you figure these things out for yourself? And your ability to stay blind to hard data is impressive.
Speaking of crapping, why are you always crapping on enthusiastic posts from AMD fans? Over Christmas break did you go to stores just to tell the kids waiting to see Santa that he isn't real? ;)
 
The market-share gains for AMD came out of the goodness of Nvidia's heart. Nvidia felt they had too much and wanted to share and give to the needy. :p

Nvidia's profits are probably more important than just market share; if Nvidia focuses more on what makes money for them, liquidating less profitable markets might be rather smart. If AMD can't capitalize, as in actually make money from the increased market share, then what would be the point?

While AMD really wants to get into HPC, machine learning, etc., will they be able to overtake Nvidia's lead, software base, and so on? That is where the big money is, except that Nvidia right now is also making a killing on gaming cards due to a large lack of competition from AMD across a large portion of the money-making PC gaming market.
 
Speaking of crapping, why are you always crapping on enthusiastic posts from AMD fans? Over Christmas break did you go to stores just to tell the kids waiting to see Santa that he isn't real? ;)

Ah, no worries mate, there's no harm done. He may just fade quietly into that good night very shortly anyway... :-o
 
The market-share gains for AMD came out of the goodness of Nvidia's heart. Nvidia felt they had too much and wanted to share and give to the needy. :p

Nvidia's profits are probably more important than just market share; if Nvidia focuses more on what makes money for them, liquidating less profitable markets might be rather smart. If AMD can't capitalize, as in actually make money from the increased market share, then what would be the point?

While AMD really wants to get into HPC, machine learning, etc., will they be able to overtake Nvidia's lead, software base, and so on? That is where the big money is, except that Nvidia right now is also making a killing on gaming cards due to a large lack of competition from AMD across a large portion of the money-making PC gaming market.

It's very possible. From what I've been reading, the HSA Naples platform with Vega, ROCm and other accelerators seems to be generating a lot of excitement. It appears to be real and genuine excitement, not fake-looking and scripted, IMO. (54:46 in the video)
 
Oh, interesting, thanks!
Anyway, I've had about enough of your marketing spiel. You can't even accept that AMD gained market share for the last three quarters in a row, even though it's right in front of your eyes, my God man. Of course they're going to continue to grow even quicker now that they have a very, very compelling platform with Naples. 54:46 ROCm SOCm!


AMD's gains are mostly in the gaming market, not so much in the HPC and professional space. Let me give you one simple example: take the AMD S9150. Performance-wise, the S9150 simply trashes Nvidia's GK110/210. When AMD launched the S9150, some people expected the top 10 machines on the TOP500 list to become dominated by AMD accelerators. In the end that did not happen at all; instead most HPC clients actually waited for Nvidia Pascal and Intel's new Xeon Phi. Go look at the TOP500 list right now and see how many machines actually use AMD accelerators. Yes, AMD is making more effort on the software front, but so is Nvidia; they are not sitting still. You make it out as if the tide will turn easily just because AMD makes some effort. The thing is, Nvidia has already started reaping the benefits and keeps moving forward while AMD is still building up their solution. If you look at the HPC segment right now, Intel fares even better than AMD.
 
AMD's gains are mostly in the gaming market, not so much in the HPC and professional space. Let me give you one simple example: take the AMD S9150. Performance-wise, the S9150 simply trashes Nvidia's GK110/210. When AMD launched the S9150, some people expected the top 10 machines on the TOP500 list to become dominated by AMD accelerators. In the end that did not happen at all; instead most HPC clients actually waited for Nvidia Pascal and Intel's new Xeon Phi. Go look at the TOP500 list right now and see how many machines actually use AMD accelerators. Yes, AMD is making more effort on the software front, but so is Nvidia; they are not sitting still. You make it out as if the tide will turn easily just because AMD makes some effort. The thing is, Nvidia has already started reaping the benefits and keeps moving forward while AMD is still building up their solution. If you look at the HPC segment right now, Intel fares even better than AMD.

Well, that is what I'm saying: they have gained share in the professional market for three quarters.
 
The market-share gains for AMD came out of the goodness of Nvidia's heart. Nvidia felt they had too much and wanted to share and give to the needy. :p

Nvidia's profits are probably more important than just market share; if Nvidia focuses more on what makes money for them, liquidating less profitable markets might be rather smart. If AMD can't capitalize, as in actually make money from the increased market share, then what would be the point?

While AMD really wants to get into HPC, machine learning, etc., will they be able to overtake Nvidia's lead, software base, and so on? That is where the big money is, except that Nvidia right now is also making a killing on gaming cards due to a large lack of competition from AMD across a large portion of the money-making PC gaming market.

Sarcastic jokes aside, you are close to correct. It was Nvidia decreasing volume by removing old SKUs too fast that caused a large volume dip for them. AMD as such didn't increase its volume.
 
Speaking of crapping, why are you always crapping on enthusiastic posts from AMD fans? Over Christmas break did you go to stores just to tell the kids waiting to see Santa that he isn't real? ;)


I'm not Christian so I don't celebrate Christmas ;) so if kids don't know Santa isn't real, that's not my problem :)
 
Oh, interesting, thanks!
Anyway, I've had about enough of your marketing spiel. You can't even accept that AMD gained market share for the last three quarters in a row, even though it's right in front of your eyes, my God man. Of course they're going to continue to grow even quicker now that they have a very, very compelling platform with Naples. 54:46 ROCm SOCm!



Can ya show me those professional marketshare numbers?

You still didn't explain to me how the SSD will be used in software, so I guess you don't really know.
 
Sorry, I'm busy, have an engineering background, and work across multiple countries, and I have gone over this multiple times in multiple threads, so I'm losing enthusiasm.
Look, Tegra releases after some of the pro and consumer dGPUs; it has been this way for a long time.
Tegra X1 is based upon Maxwell and Drive PX2 upon Pascal; in both cases they released after certain pro and consumer models of the same architecture.
The Drive PX2 went into real sampling late last year, maybe Q3 onwards.
The official sampling release to manufacturers for the Volta Tegra (named Xavier) is Q4 2017 (going by indirect comments at CES this year, it could even be very late Q3).

Also, for 2017 Nvidia has contractual obligations for two massive supercomputers and one large supercomputer that must be started (meaning the V100 has to exist and be ready to ship early in 2H, in a similar way to the P100: core clients only, followed by a DGX-1 equivalent, and finally an individual card).
This is further reinforced by the fact that Nvidia and IBM recently finalised the performance spec they can implement for these projects, and it has increased from the original estimates; in line with this, at CES this year Nvidia also revised the performance of the Volta Tegra going into sampling Q3-Q4 upwards compared to the original estimates.

Also fitting with this, Nvidia took a launch strategy totally different from the norm, a very costly and technically risky approach that reversed the usual order of launch; it caught everyone out as they started with the 610mm2 die and then worked downwards.
This would normally be a no-no, especially when it involves a node shrink and a new (for Nvidia) complex memory-controller technology in HBM2.
Now ask yourself why they would take such a big risk, then consider that this is one generation before their big supercomputer contracts, and that Pascal was brought in as a stepping stone to a Volta that had been pushed back; this was not the original plan, and we may be seeing a similar strategy from AMD with Vega 20 and Navi.
One reason would be to reduce risk by spreading new technology across both Pascal and Volta, hence Pascal is part of the technical milestone towards Volta.
Anyway, it is standard engineering practice not to introduce all the new technology and manufacturing changes (such as a fab node shrink) at once; this assists risk management and limits the domino effect they can have on the development and manufacturing cycle.
So, due to the factual information that exists on both Xavier and the supercomputer contracts (the date for them was 2017), it was always going to be this year and not 10nm.
This also means we are highly likely to see one or two consumer Volta dGPUs Q3 or later this year, likely a 1080 replacement, as again Volta will be starting with a 600mm2 die for Tesla as the 'V100'.
Cheers

Totally amazed by your answer!! Thanks for the useful info !! :hungry::):hungry::)
 
A comparison between NV cards running 4K Ultra:

[benchmark chart: Doom at 4K Ultra across NVIDIA cards]


And the interesting bits:

Additionally, Scott made clear that this is very much a next-gen product, but that many of the cutting-edge features of Vega cannot be utilized natively by DX12, let alone DX11. For example, Vega's next-gen compute engine, called the "NCU" (replacing the "CU"), offers flexible 8-bit, 16-bit, and 32-bit operations, but these must be coded for directly. Some of the other new features in Vega include a texture culling technique that has the potential to reduce VRAM usage by half (again, requiring a software assist), a new programmable geometry pipeline with 2x throughput per clock, and a more flexible "primitive shader." Vega will also have higher clocks and IPC than Polaris, which itself had 15% higher IPC than Fiji.
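To make the "must be coded for directly" point a bit more concrete, here is a rough plain-C illustration of the packed-math idea: two 16-bit lanes carried in one 32-bit word, so a single operation yields two results. This is only a conceptual SWAR sketch with my own hypothetical function names, not AMD's instruction set or any real Vega API.

#include <stdint.h>
#include <stdio.h>

/* Pack two 16-bit values into one 32-bit word (low lane, high lane). */
static uint32_t pack2x16(uint16_t lo, uint16_t hi)
{
    return (uint32_t)lo | ((uint32_t)hi << 16);
}

/* Lane-wise add of two packed pairs: one 32-bit operation, two 16-bit sums.
   The masks stop a carry out of the low lane from corrupting the high lane. */
static uint32_t add2x16(uint32_t a, uint32_t b)
{
    uint32_t lo = (a + b) & 0x0000FFFFu;
    uint32_t hi = ((a & 0xFFFF0000u) + (b & 0xFFFF0000u)) & 0xFFFF0000u;
    return hi | lo;
}

int main(void)
{
    uint32_t a = pack2x16(100, 7);
    uint32_t b = pack2x16(23, 5);
    uint32_t r = add2x16(a, b);
    printf("lo=%u hi=%u\n", (unsigned)(r & 0xFFFFu), (unsigned)(r >> 16));  /* prints lo=123 hi=12 */
    return 0;
}

The shader-level analogue is the same trade-off: unless the developer explicitly writes the lower-precision math, the hardware's packed throughput sits unused, which is why these features need direct software support.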
 