JustReason
razor1 is my Lover
- Joined: Oct 31, 2015
- Messages: 2,483
proof?

It was a failure, they didn't make money on the Fury line.
still not proof just assumptions, not saying it was a money maker but if AMD sold every one they produced then a failure it is not. However the decision not to produce more to cover costs be it R&D or fab time is an issue for the one making that decision not on a GPU that sold all units.

Look at how much red they had per quarter and tell me if they made money on Fury from its launch till it was EOL. They didn't make money on it; not only that, they went deeper into the red in the first 2 quarters after launch. After EOL, Dell was trying to sell these things at $650 a pop when GTX 1080's were selling for less. You think anyone would pick up Fury X's by the lot at that point?
It is beyond obvious where they will land: 1070, 1080, and slightly above 1080, all at better price points but higher power usage than Nvidia.
Will it be competitive for the price? Of course.
This launch is much worse than Fury X: we are talking about a power differential to the Ti model of 125 watts, which is a 50% differential vs a 10% differential, and the performance is behind by much more.
PS: this is the air-cooled top-end version; the water-cooled version won't give much more performance either, it's more like a trophy wife that isn't even a trophy.
Wait a minute, the 1080 Ti draws about 250 watts and a 1080 about 200 watts, so a stock Vega FE at about 285 watts is only a 35 watt difference from top card to top card. Obviously the Nvidia cards suck less juice from the start, but not by a ton; it only becomes a bigger issue as you overclock.
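For what it's worth, the wattage gap being argued here is quick arithmetic. A sketch using the figures quoted in this thread (real-world draw varies with sample, load, and boost behavior):

```python
# Board power figures as quoted in this thread (watts); these are the
# posters' numbers, not measured values.
cards = {"GTX 1080": 200, "GTX 1080 Ti": 250, "Vega FE": 285}

diff = cards["Vega FE"] - cards["GTX 1080 Ti"]   # top card vs top card
pct = 100 * diff / cards["GTX 1080 Ti"]
print(f"Vega FE draws {diff} W ({pct:.0f}%) more than the 1080 Ti")
```

Run against the 300 W rated TDP mentioned later in the thread instead of 285 W measured, the same calculation gives a 50 W (20%) gap, which is why the rated-TDP vs measured-draw distinction keeps coming up.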
That guarantee is already guaranteed to be false; "guarantee" is likely the only accurate word in this falsehood, simply because an identically clocked Vega with 8GB as opposed to 16GB will use less power. That's a rather difficult outcome to avoid, except perhaps for a leakage problem from binning.

Don't ever guarantee anything 100% if you don't have facts to back it up.
I'm not so sure that is the case. The pro benchmarks compare favorably to Titan in perf/watt. There are technological advantages that could put it further ahead when taken advantage of. Yet everyone keeps arguing pro benches are invalid because Titan doesn't have pro drivers. At the same time the same people argue it would take a "miracle" for AMD to gain more than 10% performance from drivers. Because drivers never make that large of a difference...

Still, it's obvious AMD will lag behind Nvidia in power/performance and will do so for the near future.
So are you or aren't you saying that drivers can make a significant difference? We know from testing that the gaming drivers have all the new architectural features disabled. That's the very reason no reputable site is bothering to review the card, excluding a few collecting ad revenue and some preliminary testing. New instructions should be there, but those are new features that will need coding for the most part. We know there were a lot of underlying changes based on public documentation. So Vega performs like a 1080 with incomplete drivers, yet isn't better than Titan because Titan didn't have complete drivers?

Dude, you are comparing non-pro drivers to pro drivers; yes it will look favorable, we discussed this before in that light. Why do you think Fury X looks like SHIT compared to Vega FE in pro apps? Yeah, Fury X doesn't have pro drivers either...
Nope, stock with boost it's 250 watts for the 1080 Ti and 180 watts on the GTX 1080. Can't forget, nV's cards are made to run with boost within their rated TDP.
AMD cards don't work that way; their TDP is rated pretty much at stock, without boost.
And currently, as BZ and Gamers Nexus stated, with the FE stock cooler you can't get accurate measurements of performance: it's power throttling with frequency drops, and not only that, frame rates are fluctuating, but at such a fast time interval the FPS meters don't pick it up all the time.
225 to 250 is where the 1080 Ti runs, and 250 is their TDP; bullshit marketing aside, that's the power draw to expect. I looked around a bit to make sure I was correct with a few review sites. As for the Vega FE, I was going off the very limited info we have, and I am not counting the attempt to overclock it. The Vega FE is rated at 300 and 375; it seems they were pretty honest, so I don't think you can go by what they did previously. But I would prefer to see what the RX does; using the FE to draw conclusions is a bit difficult. It would be nice if AMD could find a way to get speeds up without large power draws, but sometimes you can't get everything you want. If it ends up at 300 watts or so and has performance between a 1080 and a 1080 Ti, most people won't care about the power draw. If it only hits 1080 performance or a bit less, then yeah, the power draw will be an issue.
Do you mean thermal throttling? Power throttling should not depend on the cooling.
Any evidence to support that? All the tests I've seen haven't shown the binning or the changes to raster patterns, and have comparatively lower results than past architectures. I'm not presuming anything, just looking at the evidence in front of me. Most of the work AMD likely needs for Vega is compiler based. Even for the hardware stuff they are still rapidly fixing things.

Yes, all features in silicon have been activated already in drivers! Driver development doesn't work the way you presume; the way it works is by making the entire base code and then tweaking to get more performance out of it.
Flexibility in enabling hardware features, yeah. All app development I've ever seen, or been a part of, has fully tested new code prior to releasing. From the sounds of it they hit a few snags during development and had to rework some stuff. So it's not overly surprising they pushed a software deadline back a bit prior to release. If they are doing what seems likely, even Nvidia took a while to get similar optimizations working, and those resulted in the Kepler to Maxwell gains that everyone seems to think were significant.

Shit, I don't even do it that way in application programming, yet you want me to accept that drivers, which are much lower-level programming, will have more flexibility?
Wait until they are activated, or optimize based on expectations. Not all that different from Ryzen, where it took a while to fix the memory clocks even after release and get away from that interconnect bottleneck. With all the changes it just seems like they didn't have enough engineers to get through all the work.

How are they going to optimize something when parts of the pipeline aren't activated?
That much is a given with 16GB to 8GB. That will lower power, and there is the possibility even the HBM is binned to some degree. Just as 8-Hi stacks existed despite not being listed, I'm curious if faster 4-Hi stacks are out there.

Err, we already saw the FE, and we know its TDP; you are telling me they are going to change that for RX Vega? This is the same thing they have done in, what, their last 4 GPU launches?
If you have ever worked on a single team doing agile on a program, you would know you are just not making any sense, man.
IT DOES NOT WORK THAT WAY
Do you know what the critical path is when creating applications? It's what you need to get done first before anything else can be done. In most cases, it's functionality, and ensuring that functionality is working in proper order. Even with an agile implementation, that still has to be done first; then and only then can anything else be done.
So in this case things like the rasterizer, which affects the entire graphics pipeline, have to be functional. No way around it. Things like the front-end changes and polygon throughput have to be working in order to get other parts of the chip up and going in drivers. After everything that is important in the architecture to get the proper results is done, then you optimize for performance by whatever they need to do: tweaking shaders, tweaking instructions, reducing cache latency or hiding it based on program and GPU needs, etc.
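The critical-path idea above can be sketched as a longest-path computation over a task dependency graph. The task names, dependencies, and week estimates here are made up purely for illustration, not AMD's actual schedule:

```python
# Critical path = the longest dependency chain through a DAG of tasks.
from functools import lru_cache

# Hypothetical driver bring-up tasks -> their prerequisites.
deps = {
    "rasterizer":    [],               # must work before anything downstream
    "front_end":     ["rasterizer"],
    "geometry":      ["front_end"],
    "shader_tuning": ["geometry"],     # optimization comes last
}
weeks = {"rasterizer": 6, "front_end": 4, "geometry": 3, "shader_tuning": 5}

@lru_cache(maxsize=None)
def finish(task):
    """Earliest finish time: own duration plus the slowest prerequisite."""
    return weeks[task] + max((finish(d) for d in deps[task]), default=0)

critical = max(deps, key=finish)
print(critical, finish(critical))
```

Everything on that longest chain is critical: slip the rasterizer by a week and every downstream task slips with it, which is the point being made about bring-up order.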
This is not a half-assed attempt at getting something out the door and fixing things afterwards. It's not the same driver team AMD had for the 8500, or the OGL driver team. No sir.
Ryzen's feature set was all there; the RAM speeds have to do with tweaks, and shit, it didn't do much for them. It's not going to. It was easy to see they have to get close to 10k MHz to cover that latency up.
Oh, so now we are going to extraneous reasoning to give reason to RX Vega?
I already mentioned the fewer stacks, like 3 pages ago, but that doesn't look like it's going to help much in power consumption.
Your argument falls flat on its face when you compare to a Quadro GP104, which is faster than Vega FE, just like it is faster in games (in its GTX 1080 form). So GP104 is faster in both gaming and pro apps while consuming significantly less power. Coincidence? Nope, that is just Vega's true capabilities.
That's way uglier than the renders lol.
Who the hell is using Agile for hardware development? That's just bureaucratic marketing BS no team worth a shit would bother with in the first place, as it wastes their time. Any engineer should have learned to break down a project and prioritize from day one of college, along with basic teamwork skills.
Simple facts are extraneous now? It's not even fewer stacks, just smaller ones with more volume to facilitate binning. Unless AMD puts 16GB on everything.
That takes forever, and no company should be doing that at any scale. With hardware, often everything needs to be completed to function properly. Further, there should be a senior architect who breaks down all the components and distributes them to teams for as much concurrent development as possible. Putting everyone on the same critical path is a recipe for disaster, as they all trip over each other without tightly controlled tasks. Yes, there will be dependencies that may need to be tackled first, but there should be some outline developers can use to work ahead as much as possible. The only exception is a real lack of engineers, but that just falls to simple prioritization.
Not memory as much as the clocks affecting the bandwidth between cores on Infinity. Hence memory overclocks giving Ryzen significant performance boosts indirectly. May be the same case for Vega.

I don't think it's limited by memory much in performance, given its performance level and what we know from memory OCing other cards at similar levels.
How's it falling flat? GP104 is now faster than GP102? The whole argument was that drivers can make a rather significant difference, a difference your example would confirm. Follow Linux driver development and seeing huge swings as new features hit isn't uncommon. You're assuming AMD's pro drivers are fully functional, or that performance isn't entirely tied to geometry performance, where Nvidia has an inherent advantage that is very situational. Like RX, we haven't seen the pro Vegas either, just the FE/engineering sample edition allowing some developers to get started.
Yes, GP104 with pro drivers is faster than GP102 with gaming drivers.
Nope, AMD themselves confirmed Vega FE is using pro drivers; they even stated the Titan Xp didn't have the advantage of pro drivers like the Vega FE does.
It will lower power consumption by some amount, resulting in a lower TDP. For that reason alone, the lower-TDP-means-slower "guarantee" I responded to breaks down. That's a fairly simple truth; I'm not implying it will make or break power consumption.
There's a reason programmers always joke about how adding manpower decreases productivity so a job takes longer. Shortly followed by middle-management meetings to update everyone on progress that ultimately bring progress to a screeching halt.
So you can show me a link to the WX or Instinct lines with certified pro drivers? Guess I didn't realize they were out yet. Probably because they haven't released, though.
If the gaming drivers aren't quite ready, why would the certified pro stuff be complete? Who's to say the pro tests are any more complete than the gaming ones, as neither product is released?
Ok.

Look it up, there are white papers on it if you want to search for them. Yes, they do use it for hardware development. Any company that doesn't is a company that doesn't need to; pretty much, their development timelines aren't bound by market timelines. And if you have ever seen ATi's requirements for driver developers, Agile is a must-have for them.
https://jobs.amd.com/job/Sunnyvale-3D-Graphics-Driver-Performance-Engineer-CA-94085/411783100/
The only one that mentioned Agile was for a web developer. All the ASIC, power, verification, etc. hardware guys apparently don't bother with it. Guess that's why engineering schools don't bother teaching it.
The regular Furys didn't, retailers were offloading them for $250 until a few months ago.

All Furys sold out, so not a failure at all. May not have outperformed Nvidia's top performer, but it did hang with it.
Wow! A new type of banana!
It is completely immune to Panama Disease TR4.
Source .. ?

It was a failure, they didn't make money on the Fury line.