Armenius
Extremely [H]
Joined: Jan 28, 2014
Messages: 42,089
> As far as I know all NVIDIA cards have supported feature level 12_1 since Maxwell.

From an AMD guy. So ahead of Pascal with Binding, Conservative Raster, UAVs, etc.
> Sauce?

12nm is already ready for production. No one but NV can use it because it's a specific node for them; NV probably helped make it, not sure though, or it was tailored to their needs.
> So it's around 30-35% faster than Fury X according to 3DMark... good job, AMD.

TBH if it's pulling 1.3GHz it's about where we expected. Throw in another 10-15% for shitty drivers + optimisation (this is par for the course for AMD every time they have something remotely new) and you're probably closer to the real figure at about 1.4-1.5x when the dust settles. Water cooled might go 1.6x and a bit, perhaps...
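For what it's worth, the 1.4-1.5x guess is roughly what naive clock scaling predicts. A back-of-envelope sketch (assuming both chips have 4096 shaders so gains scale with clock, Fury X at ~1050 MHz, the leaked ~1.3 GHz figure, and treating the 10-15% driver uplift as the poster's guess, not data):

```python
# Naive scaling estimate: same shader count, so at equal IPC the
# speedup over Fury X is roughly the clock ratio times any driver gains.
FURY_X_MHZ = 1050           # Fury X boost clock (public spec)
VEGA_FE_MHZ = 1300          # clock figure discussed in this thread

clock_gain = VEGA_FE_MHZ / FURY_X_MHZ           # gain from clocks alone
driver_low, driver_high = 1.10, 1.15            # hypothetical driver uplift

est_low = clock_gain * driver_low
est_high = clock_gain * driver_high
print(f"clock-only gain: {clock_gain:.2f}x")
print(f"with driver uplift: {est_low:.2f}x - {est_high:.2f}x")
```

Which lands in the same ballpark as the 1.4-1.5x figure above before any architectural gains are counted.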
Sauce?
https://techreport.com/news/31582/report-tsmc-set-to-fabricate-volta-and-centriq-on-12-nm-process
TSMC was vague AF.
AnandTech has HVM (high-volume manufacturing) starting in 1H 2018.
Nvidia has only said a vague "2018", which pretty obviously means 2H in the tech-company book of announcement-speak excuses; they'd say Q1 or 1H otherwise. 16nm was only ever a rumour, so it's definitely not an early-2018 start... 16nm was the only other option.
> TBH if it's pulling 1.3GHz it's about where we expected. Throw in another 10-15% for shitty drivers + optimisation (this is par for the course AMD every damn time they have something remotely new) and you're probably closer to the real figure at about 1.40-1.5x when the dust settles. Water cooled might go 1.6x and a bit perhaps...

It's more likely the drivers will be better tuned for gaming; it's obvious this card is trying to be a jack of all trades. Not looking to upgrade anytime soon anyway, I'd just like to see some competition.
Vega looks to be a Fury card built on 14nm, and AMD has been tuning their platform for years. They won't be pulling better-tuned drivers out of a hat, though they will likely claim to.
> I do remember people saying there was no possible way for them to fix the PCI-E power draw issue for the reference RX 480 with drivers, so....

Well, the way they fixed it was pretty dirty.
> Anarchist4000, I struggle to see how primitive shaders will effectively improve its performance. If you offload rasterization to the shader array as a compute shader you are simply taking away resources from the pixel and compute shader portions of the render. So this will only make sense insofar as the load is balanced to provide the optimum framerate, which will necessarily be lower than that of the competition in this case (no geometry bottlenecks).

Rasterization is already in the shader array. Normally it's limited to one wave on the first CU of each shader engine, probably running continuously without the cadence, or designed around it, with all the interpolation leaning on LDS hardware. Unless you only render triangles, even Fiji's rate was adequate with async. Primitive shaders will likely kick in with tessellation, running on more than 4 of the 256 SIMDs and likely increasing throughput. Everything on Vega looks programmable with all the Tier 3 features, so devs are technically free to do whatever they want without bothering with the fixed geometry pipeline. That's what most of the SIGGRAPH papers were proposing as a faster, more flexible option.
> They have already announced Navi will use a new memory type. That said, I'm wondering if instead they mean SSG and will keep HBM, especially if they do MCM - they might go GDDR for the mid-to-low-end cards, though, like Polaris.

Probably HBM3, which supposedly has a low-cost variant. Best guess is stacked memory like HBM2, but without the interposer, integrated into their MCMs just like a Ryzen die. Vega, and even Fiji, already do the SSG thing. NVDIMMs, which SSG ideally uses, work well for reading and density, but would likely require a huge cache on the chip.
> Alternatively, AMD could have sent around review cards to avoid the whole mess. And AMD further screwed themselves by including a Gaming Mode, yet people will still claim RX Vega will have better performance when it launches.

A card for game devs sort of needs a gaming mode, even if performance is limited. That way devs can at least start experimenting with all the Tier 3 features and patching current games for the RX release. Or devs could just wait until after the release to start patching their games with hardware they haven't seen.
> As far as I know all NVIDIA cards have supported feature level 12_1 since Maxwell.

Features have different tiers. 12_1 was crafted by limiting 12_0 to what Nvidia supported and putting the features AMD lacked into 12_1. So AMD has better support within 12_0, which is what most DX12 titles actually use.
Just consider Sebbbi's 100% compute game that is now out, which we discussed a while back. Not quite years off, as some suggested. It doesn't even use triangles, as I understand it - just ray marching and voxels. As they gave him a Vega, I'm expecting an article soon enough. Or they're hiring him and it was an interview.
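For anyone unfamiliar with the technique: ray marching (sphere tracing) steps a ray through a signed distance field instead of intersecting triangles, so a renderer built on it needs no geometry pipeline at all. A minimal, self-contained sketch (a toy example, not Sebbbi's actual renderer):

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere (negative inside)."""
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def ray_march(origin, direction, max_steps=64, eps=1e-4, max_dist=20.0):
    """Sphere tracing: advance along the ray by the SDF value until a hit."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = sdf_sphere(p)
        if d < eps:
            return t      # hit: distance travelled along the ray
        t += d            # safe step: SDF guarantees no surface is closer
        if t > max_dist:
            break
    return None           # miss

hit = ray_march((0, 0, 0), (0, 0, 1))   # ray straight at the sphere
print(hit)                               # 2.0: front face of the sphere at z=2
```

Note there is no triangle anywhere in that loop - everything is just distance evaluations, which is why it maps so naturally onto pure compute.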
Don't know who/what Sebbbi is, but probably has async shaders confused with async compute.
So they made a 1080 over a year after the fact that costs more than 2x as much? Should we just start calling it the Vega Failure Edition?
1080 has ECC memory?
Or did you miss the part where this is a workstation card?
The P4000 is wayyy below the 1080 in Time Spy benchmarks.
> Don't know who/what Sebbbi is, but probably has async shaders confused with async compute.

Former rendering lead at Ubisoft and past member of the DirectX advisory board. Wrote at least one SIGGRAPH paper and knows what he's doing as a AAA dev. Also the guy AMD hand-delivered a Vega to.
> Doesn't work that way, learn some graphics programming before drawing conclusions like that. You still have to factor in cache retention, and that is where the problems will occur.

So you're suggesting the drivers, and the way it's been done for years, are wrong? I'd suggest you learn how GPUs actually work; even an entry-level college course should suffice. The actual rasterization step isn't that involved: just gather enough pixels to fill a wave and continue. There will be far more pixels than triangles in almost all cases.
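The pixels-versus-triangles point is easy to demonstrate with a toy edge-function rasterizer (a sketch of the classic coverage test, not how Vega's hardware actually does it):

```python
def rasterize(tri, width, height):
    """Count pixels whose center lies inside a triangle (edge-function test)."""
    (x0, y0), (x1, y1), (x2, y2) = tri

    def edge(ax, ay, bx, by, px, py):
        # Signed area of (A, B, P); sign tells which side of edge AB P is on.
        return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

    covered = 0
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5            # sample at pixel center
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            # Inside if all edge functions agree in sign (either winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered += 1
    return covered

# One modest on-screen triangle already produces over a thousand pixels,
# i.e. dozens of 64-lane waves of pixel work per rasterized primitive.
pixels = rasterize([(2, 2), (60, 5), (30, 50)], 64, 64)
print(pixels, "pixels from 1 triangle ->", -(-pixels // 64), "64-lane waves")
```

So even on a tiny 64x64 grid, one triangle generates orders of magnitude more pixel work than rasterization work, which is the claim above.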
> And no, they aren't out yet. Sebbbi was working on a 100% compute application that was not even in beta or alpha at the time, 6 months ago. I PMed him 6 months ago when we had this discussion and he stated the same. Stop with the BS. At that time he stated he was nowhere near ready for a release even for testing purposes, let alone a full game on something like that.

Claybook, which renders over DirectCompute? Guess it's only announced and not quite released yet. The gameplay video looked cool, though. Regardless, he should be working on a blog about it with Vega. Guess we wait and see how far off "coming soon" means. At least it looks like he moved on from consoles to PC. So what BS? Just because your facts have a habit of turning upside down in short order doesn't mean I'm full of BS.
Comparing the die sizes and transistor counts of current 16nm chips against V100 on 12nm, the transistor density didn't change, so it's pretty clear it's a modified 16nm node, which was also hinted at by NV and TSMC.
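That density comparison is easy to sanity-check with the commonly reported figures (treat the numbers as approximate; they are from public spec sheets, not this thread):

```python
# Publicly reported figures (approximate):
#   GP100 (16nm):       ~15.3B transistors on ~610 mm^2
#   GV100 ("12nm FFN"): ~21.1B transistors on ~815 mm^2
gp100 = 15.3e9 / 610    # transistors per mm^2
gv100 = 21.1e9 / 815

print(f"GP100: {gp100 / 1e6:.1f} MTr/mm^2")
print(f"GV100: {gv100 / 1e6:.1f} MTr/mm^2")
print(f"density change: {gv100 / gp100 - 1:+.1%}")
```

Only a few percent difference, which is consistent with a tweaked 16nm node rather than a genuine shrink.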
The process name is 12nm FFN or something like that - it has an N for Nvidia at the end.
> TBH if it's pulling 1.3GHz it's about where we expected. Throw in another 10-15% for shitty drivers + optimisation (this is par for the course AMD every damn time they have something remotely new) and you're probably closer to the real figure at about 1.40-1.5x when the dust settles. Water cooled might go 1.6x and a bit perhaps...
I can't wait for the meltdowns when the actual gaming card scores barely better.
Hey guys – AMD employee speaking here! I want you guys to know that we appreciate the community speaking to each other and discussing our upcoming products. We even come by here ourselves to chat with you every now and then, but we have no control over this subreddit – we do not condone censorship and we’re 100% open to hearing what you all have to say.
I don't check Vega news for a day and suddenly everything goes to shit.
Now AMD employees are getting involved?
You're confusing prosumer-type gear with workstation gear.
Prosumer is, e.g., my handycam. It's got enough decent glass and add-on parts that it was widely used as a B camera or small/hidden camera for filming programmes that aired in national broadcasts. I have a few videos online, used throughout the national education curriculum, shot entirely on that camera.
Yet I also use it for holidays and videoing family stuff.
That is prosumer.
For a card, that'd be, e.g., if I was doing CAD or video/photo editing part-time or semi-professionally and wanted to play a few games on the side after a long week of BS...
So recommending a Quadro 4000 to someone like me is an absolute joke. When I want to fire up Doom, or anything demanding, 30 fps / 1060-1070 performance at best is okay for a $1k+ card, eh?
This is why the Pro Duo and Vega FE exist with dual driver versions. They are targeted at the original Titan market, which Nvidia quickly stopped catering to, pushing the "just buy two cards" route instead.
If FP16 applications become more popular, this becomes an even more enticing offer for those who can use both capabilities of the card. It's not a purely pro card, nor is it a purely gaming card: it's prosumer. Quadro, in comparison, is balls-deep in workstation territory.
OK, it's official... we have a dud... time to go back to hibernation.
Stream goes down just before he does the power measurements...
Back again.
It's using ~300 watts in Hitman.
Damn, 1080 performance at 300 watts. Perf/watt is great!
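To put a number on the sarcasm: at equal performance, the perf/watt gap is just the inverse power ratio. A quick sketch assuming the ~300 W Hitman figure from the stream and the GTX 1080's 180 W TDP (public spec):

```python
# If the FE really lands at GTX 1080 performance, compare board power:
# equal fps means perf/watt scales as the inverse of power draw.
vega_fe_watts = 300      # Hitman draw reported in the stream
gtx_1080_watts = 180     # GTX 1080 TDP (public spec)

ratio = vega_fe_watts / gtx_1080_watts
print(f"1080 perf/watt advantage at equal fps: {ratio:.2f}x")
```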
RX Vega needs to be around 400 bucks to make this thing fly. Damn, a GTX 1080 can run on a 500 watt power supply...
41 frames, max settings, Hitman at 4K.
WOW, that is GTX 1070 levels of shit.
Damn, if this holds across the board, this is DOA, except maybe for mining.