Vega Rumors



Look at how much red they had per quarter and tell me if they made money on Fury from its launch till it was EOL. They didn't make money on it; not only that, they went deeper into the red in the first 2 quarters after launch. After EOL, Dell was trying to sell these things at $650 a pop when they were selling GTX 1080's for less. You think anyone would pick up Fury X's at that point by the lot?
 
Look at how much red they had per quarter and tell me if they made money on Fury from its launch till it was EOL. They didn't make money on it; not only that, they went deeper into the red in the first 2 quarters after launch. After EOL, Dell was trying to sell these things at $650 a pop when they were selling GTX 1080's for less. You think anyone would pick up Fury X's at that point by the lot?
Still not proof, just assumptions. I'm not saying it was a money maker, but if AMD sold every one they produced, then a failure it is not. However, the decision not to produce more to cover costs, be it R&D or fab time, is an issue for the one making that decision, not for a GPU that sold all its units.
 
It is beyond obvious where they will land: 1070, 1080, and slightly above 1080, all at better price points and higher power usage than Nvidia.
Will it be competitive for the price? Of course.

How do you expect better price points when NV has smaller dies, cheaper memory, a year's head start on production optimization, and cards that have already made them a shit ton of money beyond recouping their investment? NV can easily beat AMD on price this gen if they need to.
 
This launch is much worse than Fury X; we are talking about a power differential to the Ti model of 125 watts, which is a 50% differential vs. a 10% differential, and the performance is behind by much more.

PS: this is the air-cooled top-end version; the water-cooled version won't give much more performance either. It's more like a trophy wife that isn't even a trophy.

Wait a minute: the 1080 Ti draws about 250 watts and a 1080 is about 200 watts, and a stock Vega FE is about 285 watts. We're only talking about a 35 watt difference from top card to top card. Obviously the Nvidia cards suck less juice from the start, but not by a ton; it only becomes a bigger issue as you overclock.
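For anyone keeping score in the wattage back-and-forth, here is a minimal Python sketch of those deltas, using the rated/rumored figures quoted in this thread rather than measured numbers:

# Rough comparison of the board-power figures quoted in this thread.
# These are rated/rumored numbers from the posts above, not measurements.
cards = {
    "GTX 1080": 200,       # ~200 W per the post above (nV rates it at 180 W)
    "GTX 1080 Ti": 250,    # rated TDP
    "Vega FE (air)": 285,  # rated board power
}
baseline = cards["GTX 1080 Ti"]
for name, watts in cards.items():
    delta = watts - baseline
    print(f"{name}: {watts} W ({delta:+d} W, {100 * delta / baseline:+.0f}% vs 1080 Ti)")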
 
Still not proof, just assumptions. I'm not saying it was a money maker, but if AMD sold every one they produced, then a failure it is not. However, the decision not to produce more to cover costs, be it R&D or fab time, is an issue for the one making that decision, not for a GPU that sold all its units.


Look, Polaris isn't making them money, so how the hell do you think Fury X made them money? Polaris sells by volume 30, 40, 50, 60 times more. It's more like 80 times more volume, but still.....

Be realistic......

Per-unit gross they are making money, of course, but net is where it counts, 'cause that is the bottom line: whether they are making money or not.

Just look at how many billions in cash they have burned through from sales of IP, business units, and fabs. AMD has not been profitable in many years, and it got much worse recently, prior to the Zen launch. Even after the Zen launch they still haven't gone into the black; hopefully that will change.

30% gross margins is pretty much an automatic loss for any company, even in service industries; if we start talking about manufacturing companies, it's more of a loss, and if we start talking about tech companies..... well, more than that.
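As a back-of-the-envelope illustration of that point, here is a minimal Python sketch; every figure in it is invented for the example, not AMD's actual financials:

# Why ~30% gross margin can still mean a net loss: gross profit still has
# to cover R&D, SG&A, and so on. All figures here are invented examples.
revenue = 1000.0            # hypothetical quarterly revenue, $M
gross_profit = revenue * 0.30               # $300M at a 30% gross margin
operating_expenses = 350.0  # hypothetical R&D + SG&A, $M
net_income = gross_profit - operating_expenses
print(f"Gross profit ${gross_profit:.0f}M, net ${net_income:.0f}M")  # net is -$50M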
 
Wait a minute: the 1080 Ti draws about 250 watts and a 1080 is about 200 watts, and a stock Vega FE is about 285 watts. We're only talking about a 35 watt difference from top card to top card. Obviously the Nvidia cards suck less juice from the start, but not by a ton; it only becomes a bigger issue as you overclock.


Nope, stock with boost is 250 watts for the 1080 Ti and 180 watts on the GTX 1080. Can't forget, nV's cards are made to run with boost within their rated TDP.

AMD cards don't work that way; their TDP is rated pretty much at stock, without boost. Also, the water-cooled version, with a TDP of 375 watts, is the one that will be higher performance than the GTX 1080, and the 285 watt air-cooled one is the one that will be comparable to the GTX 1080.

And currently, as BZ and Gamers Nexus stated, with the FE stock cooler you can't get accurate measurements of performance: it's power throttling with frequency drops, and not only that, frame rates are fluctuating, but at such a fast time interval that the FPS meters don't pick it up all the time.
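To illustrate why an FPS meter can miss that, here is a minimal Python sketch with a made-up frametime trace, showing how a once-per-second average hides short throttling dips:

# An averaged FPS counter can hide brief frequency/power dips.
# Synthetic frametime trace: 55 smooth frames plus 5 spikes (made-up data).
frametimes_ms = [16.7] * 55 + [50.0] * 5
avg_fps = len(frametimes_ms) / (sum(frametimes_ms) / 1000.0)
worst_fps = 1000.0 / max(frametimes_ms)
print(f"meter shows ~{avg_fps:.0f} fps on average")  # looks fine
print(f"worst frame is ~{worst_fps:.0f} fps")        # the stutter it misses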
 
Don't ever guarantee anything 100% if you don't have facts to back it up.
That guarantee is already guaranteed to be false (and that counter-guarantee is likely accurate), simply because an identically clocked Vega with 8GB as opposed to 16GB will use less power. That's a rather difficult outcome to avoid, except perhaps for a leakage problem from binning.
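To put rough numbers on that, a Python sketch with assumed per-stack HBM2 power; the per-stack figures are ballpark guesses, not datasheet values:

# Rough effect of dropping from 16GB (2x 8-Hi HBM2) to 8GB (2x 4-Hi).
# Per-stack power numbers are assumed ballparks, NOT datasheet values.
power_8hi = 10.0    # W per 8-Hi stack, assumed
power_4hi = 7.0     # W per 4-Hi stack, assumed
board_16gb = 300.0  # W rated board power for the 16GB card
saving = 2 * (power_8hi - power_4hi)
print(f"8GB config: ~{board_16gb - saving:.0f} W (saves ~{saving:.0f} W)")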

Still, it's obvious AMD will lag behind Nvidia in power/performance and will do so for the near future.
I'm not so sure that is the case. The pro benchmarks compare favorably to Titan in perf/watt. There are technological advantages that could put it further ahead when taken advantage of. Yet everyone keeps arguing pro benches are invalid because Titan doesn't have pro drivers. At the same time, the same people argue it would take a "miracle" for AMD to gain more than 10% performance from drivers. Because drivers never make that large of a difference...
 
That guarantee is already guaranteed to be false (and that counter-guarantee is likely accurate), simply because an identically clocked Vega with 8GB as opposed to 16GB will use less power. That's a rather difficult outcome to avoid, except perhaps for a leakage problem from binning.


I'm not so sure that is the case. The pro benchmarks compare favorably to Titan in perf/watt. There are technological advantages that could put it further ahead when taken advantage of. Yet everyone keeps arguing pro benches are invalid because Titan doesn't have pro drivers. At the same time, the same people argue it would take a "miracle" for AMD to gain more than 10% performance from drivers. Because drivers never make that large of a difference...

Dude, you are comparing non-pro drivers to pro drivers; yes, it will look favorable. We discussed this before in that light. Why do you think Fury X looks like SHIT compared to Vega FE in pro apps? Yeah, Fury X doesn't have pro drivers either........

Let's say this: Fury X was a complete pile of dung when it came to perf/watt, looking at Vega FE in pro applications, lol.

Drivers didn't do shit; drivers activated certain pro-only features, like what I mentioned before, which help only in pro situations. Gaming isn't like that, and you were also trying to convince people drivers aren't ready for gaming. THAT is BS; drivers should have been ready when tape-out was DONE.

This huge-ass lead time is only because AMD is trying to figure out ways, if at all possible, to close the gaps it has to the respective competitors' cards.
 
Dude, you are comparing non-pro drivers to pro drivers; yes, it will look favorable. We discussed this before in that light. Why do you think Fury X looks like SHIT compared to Vega FE in pro apps? Yeah, Fury X doesn't have pro drivers either........
So are or aren't you saying that drivers can make a significant difference? We know the gaming drivers have all the new architectural features disabled, from testing. That's the very reason no reputable site is bothering to review the card, excluding a few collecting ad revenue and some preliminary testing. New instructions should be there, but those are new features that will need coding for the most part. We know there were a lot of underlying changes, based on public documentation. So Vega performs like a 1080 with incomplete drivers, yet isn't better than Titan because Titan didn't have complete drivers?
 
Nope, stock with boost is 250 watts for the 1080 Ti and 180 watts on the GTX 1080. Can't forget, nV's cards are made to run with boost within their rated TDP.

AMD cards don't work that way; their TDP is rated pretty much at stock, without boost.

And currently, as BZ and Gamers Nexus stated, with the FE stock cooler you can't get accurate measurements of performance: it's power throttling with frequency drops, and not only that, frame rates are fluctuating, but at such a fast time interval that the FPS meters don't pick it up all the time.

225 to 250 is where the 1080 Ti runs, and 250 is their TDP; bullshit marketing aside, that's the power draw to expect. I looked around a bit at a few review sites to make sure I was correct. As for the Vega FE, I was going off the very limited info we have, and I am not counting the attempt to overclock it. The Vega FE is rated at 300 and 375, and it seems they were pretty honest, so I don't think you can go by what they did previously. But I would prefer to see what the RX does; using the FE to draw conclusions is a bit difficult. It would be nice if AMD could find a way to get speeds up without large power draws, but sometimes you can't get everything you want. If it ends up at 300 watts or so and has performance between a 1080 and a 1080 Ti, most people won't care about the power draw. If it only hits 1080 performance or a bit less, then yeah, the power draw will be an issue.
 
So are or aren't you saying that drivers can make a significant difference? We know the gaming drivers have all the new architectural features disabled, from testing. That's the very reason no reputable site is bothering to review the card, excluding a few collecting ad revenue and some preliminary testing. New instructions should be there, but those are new features that will need coding for the most part. We know there were a lot of underlying changes, based on public documentation. So Vega performs like a 1080 with incomplete drivers, yet isn't better than Titan because Titan didn't have complete drivers?


Activating features is one thing, but we are talking about fully functional feature sets already!

Yes, all features in silicon have been activated already in drivers! Driver development doesn't work the way you presume; the way it works is by writing the entire base code and then tweaking to get more performance out of it.

Shit, I don't even do it that way in application programming, yet you want to really accept that drivers, which are at a much lower level of programming, will have more flexibility?

How are they going to optimize something when parts of the pipeline aren't activated? Does that even make sense? Let's do double or triple or more work than what is necessary? What will the billable hours and rates look like? Another write-off, which they can't really take a write-off for.....

This is an experienced driver team; they will be working in ways that they know will give them the best results with the least amount of money being billed and time necessary to complete tasks as needed, or damn close to it.

In the first scrum meeting they would have written down every single feature and function in their stories, to be broken down into tasks. As iterations go on, they would have found other stories to focus on based on difficulty of implementation, but all of this is done before tape-out. By the time tape-out is done, all that's really left is application-specific optimizations. That is when you spin off the main driver team and bring up other programmers to share the information learned from the main team, and the main driver team will then start working on the next project.
 
225 to 250 is where the 1080 Ti runs, and 250 is their TDP; bullshit marketing aside, that's the power draw to expect. I looked around a bit at a few review sites to make sure I was correct. As for the Vega FE, I was going off the very limited info we have, and I am not counting the attempt to overclock it. The Vega FE is rated at 300 and 375, and it seems they were pretty honest, so I don't think you can go by what they did previously. But I would prefer to see what the RX does; using the FE to draw conclusions is a bit difficult. It would be nice if AMD could find a way to get speeds up without large power draws, but sometimes you can't get everything you want. If it ends up at 300 watts or so and has performance between a 1080 and a 1080 Ti, most people won't care about the power draw. If it only hits 1080 performance or a bit less, then yeah, the power draw will be an issue.


Err, we already saw the FE, and we know its TDP. You are telling me they are going to change that for RX Vega, when this is the same thing they have done in, what, their last 4 GPU launches?

So you want to assume otherwise than what they are doing right now and did before?

Can't see AMD changing that, when Vega FE is the same way.
 
Still not proof, just assumptions. I'm not saying it was a money maker, but if AMD sold every one they produced, then a failure it is not. However, the decision not to produce more to cover costs, be it R&D or fab time, is an issue for the one making that decision, not for a GPU that sold all its units.

Fiji was aimed to be an $800-$850 GPU.. "da Titan X killer at $800".. which ended up at $650 because Nvidia launched the 980 Ti at $650 and fucked all of AMD's plans, cutting their margins.. Fury X ended up being priced not by production cost, but by R&D for both the GPU and the HBM partnership, plus marketing and logistics.. they cut into the profit, and that can be counted as a "loss".. the same applied to the Fury Nano; that card had no reason to cost the same as the Fury X, however it was priced the same.
 
So it's competitive with a 1080 and comes close to a 1080 Ti FE in some instances when it's overclocked and pulling close to 400 watts? Fuck that. I was really hoping for it to be a competitive card so I could get rid of my pair of 1070s and use FreeSync.
 
Nope, stock with boost is 250 watts for the 1080 Ti and 180 watts on the GTX 1080. Can't forget, nV's cards are made to run with boost within their rated TDP.


And currently, as BZ and Gamers Nexus stated, with the FE stock cooler you can't get accurate measurements of performance: it's power throttling with frequency drops, and not only that, frame rates are fluctuating, but at such a fast time interval that the FPS meters don't pick it up all the time.

Do you mean thermal throttling? Power throttling should not depend on the cooling :p
 
Yes, all features in silicon have been activated already in drivers! Driver development doesn't work the way you presume; the way it works is by writing the entire base code and then tweaking to get more performance out of it.
Any evidence to support that? All the tests I've seen haven't shown the binning or the changes to raster patterns, and show comparatively lower results than past architectures. I'm not presuming anything, just looking at the evidence in front of me. Most of the work AMD likely needs for Vega is compiler-based. Even for the hardware stuff, they are still rapidly fixing things.

Shit, I don't even do it that way in application programming, yet you want to really accept that drivers, which are at a much lower level of programming, will have more flexibility?
Flexibility in enabling hardware features, yeah. All app development I've ever seen, or been a part of, has fully tested new code prior to release. From the sounds of it, they hit a few snags during development and had to rework some stuff. So it's not overly surprising they pushed a software deadline back a bit prior to release. If they are doing what seems likely, even Nvidia took a while to get similar optimizations working, and those resulted in the Kepler-to-Maxwell gains that everyone seems to think were significant.

How are they going to optimize something when parts of the pipeline aren't activated?
Wait until they are activated, or optimize based on expectations. Not all that different from Ryzen, where it took a while to fix the memory clocks even after release and get away from that interconnect bottleneck. With all the changes, it just seems like they didn't have enough engineers to get through all the work.

Err, we already saw the FE, and we know its TDP. You are telling me they are going to change that for RX Vega, when this is the same thing they have done in, what, their last 4 GPU launches?
That much is a given going from 16GB to 8GB. That will lower power, and there is the possibility that even the HBM is binned to some degree. Just as 8-Hi stacks existed despite not being listed, I'm curious if faster 4-Hi stacks are out there.
 
Any evidence to support that? All the tests I've seen haven't shown the binning or the changes to raster patterns, and show comparatively lower results than past architectures. I'm not presuming anything, just looking at the evidence in front of me. Most of the work AMD likely needs for Vega is compiler-based. Even for the hardware stuff, they are still rapidly fixing things.

Flexibility in enabling hardware features, yeah. All app development I've ever seen, or been a part of, has fully tested new code prior to release. From the sounds of it, they hit a few snags during development and had to rework some stuff. So it's not overly surprising they pushed a software deadline back a bit prior to release. If they are doing what seems likely, even Nvidia took a while to get similar optimizations working, and those resulted in the Kepler-to-Maxwell gains that everyone seems to think were significant.


Wait until they are activated, or optimize based on expectations. Not all that different from Ryzen, where it took a while to fix the memory clocks even after release and get away from that interconnect bottleneck. With all the changes, it just seems like they didn't have enough engineers to get through all the work.


If you have worked on a single team running agile on a program, you would know you are just not making any sense, man.

IT DOES NOT WORK THAT WAY

Do you know what the critical path is when creating applications? It's what you need to get done first before anything else can be done. In most cases, it's functionality and ensuring that functionality is working in proper order. Even with an agile implementation, that still has to be done first; then and only then can anything else be done.

So in this case, things like the rasterizer, which affects the entire graphics pipeline, have to be functional. No way around it. Things like its front-end changes and polygon throughput have to be working in order to get other parts of the chip up and going in drivers. After everything that is important in the architecture to get the proper results is done, then you optimize for performance by whatever means necessary: tweaking shaders, tweaking instructions, reducing cache latency or hiding it based on program and GPU needs, etc.
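To make the critical-path idea concrete, here is a minimal Python sketch of that kind of dependency ordering; the task names and dependencies are illustrative guesses, not AMD's actual plan:

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical driver-bringup dependencies: each task lists what must be
# functional before it can start. Task names are illustrative only.
deps = {
    "rasterizer": set(),                    # affects the whole pipeline
    "front_end": {"rasterizer"},
    "polygon_throughput": {"front_end"},
    "perf_tweaks": {"polygon_throughput"},  # optimization comes last
    "per_app_opts": {"perf_tweaks"},
}
print(list(TopologicalSorter(deps).static_order()))
# -> ['rasterizer', 'front_end', 'polygon_throughput', 'perf_tweaks', 'per_app_opts']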

This is not a half-assed attempt at getting something out the door and fixing things afterwards. It's not the same driver team AMD had for the 8500 or the old OGL driver team. No sir.

Ryzen's feature set was all there; the RAM speeds have to do with tweaks, and shit, it didn't do much for them. It's not going to. It was easy to see they would have to get close to 10k MHz to cover that latency up.
That much is a given going from 16GB to 8GB. That will lower power, and there is the possibility that even the HBM is binned to some degree. Just as 8-Hi stacks existed despite not being listed, I'm curious if faster 4-Hi stacks are out there.

Oh, so now we are resorting to extraneous reasoning to justify RX Vega?

I already mentioned the fewer stacks, like 3 pages ago, but that doesn't look like it's going to help much with power consumption.
 
Any evidence to support that? All the tests I've seen haven't shown the binning or the changes to raster patterns, and show comparatively lower results than past architectures. I'm not presuming anything, just looking at the evidence in front of me. Most of the work AMD likely needs for Vega is compiler-based. Even for the hardware stuff, they are still rapidly fixing things.


Flexibility in enabling hardware features, yeah. All app development I've ever seen, or been a part of, has fully tested new code prior to release. From the sounds of it, they hit a few snags during development and had to rework some stuff. So it's not overly surprising they pushed a software deadline back a bit prior to release. If they are doing what seems likely, even Nvidia took a while to get similar optimizations working, and those resulted in the Kepler-to-Maxwell gains that everyone seems to think were significant.


Wait until they are activated, or optimize based on expectations. Not all that different from Ryzen, where it took a while to fix the memory clocks even after release and get away from that interconnect bottleneck. With all the changes, it just seems like they didn't have enough engineers to get through all the work.


That much is a given going from 16GB to 8GB. That will lower power, and there is the possibility that even the HBM is binned to some degree. Just as 8-Hi stacks existed despite not being listed, I'm curious if faster 4-Hi stacks are out there.

I don't think it's limited much by memory in performance, given its performance level and what we know from memory OCing other cards at similar levels.

TDP it might help slightly... but AIB cards with good coolers would be many times more beneficial.

Again, it'll probably be a great card for the FreeSync folks. I'd be excited. Not anything fantastic for AMD's financials.
 
I need to unsub from this thread; it's going in serious circles. When it finally launches and we all laugh our asses off (or cry, depending on your team colours), I'll come review this again.

This card will be fine for the FreeSync guys who need something better, and possibly the cut-down versions for miners. The rest just isn't very interesting since FE's launch. They won't be cheap, they won't be particularly fast, and the design is slated for replacement as quickly as possible (meaning slightly faster than normal, if Raj is serious about keeping his job and reclaiming the glory of RTG).
 
If you have worked on a single team running agile on a program, you would know you are just not making any sense, man.

IT DOES NOT WORK THAT WAY

Do you know what the critical path is when creating applications? It's what you need to get done first before anything else can be done. In most cases, it's functionality and ensuring that functionality is working in proper order. Even with an agile implementation, that still has to be done first; then and only then can anything else be done.

So in this case, things like the rasterizer, which affects the entire graphics pipeline, have to be functional. No way around it. Things like its front-end changes and polygon throughput have to be working in order to get other parts of the chip up and going in drivers. After everything that is important in the architecture to get the proper results is done, then you optimize for performance by whatever means necessary: tweaking shaders, tweaking instructions, reducing cache latency or hiding it based on program and GPU needs, etc.

This is not a half-assed attempt at getting something out the door and fixing things afterwards. It's not the same driver team AMD had for the 8500 or the old OGL driver team. No sir.

Ryzen's feature set was all there; the RAM speeds have to do with tweaks, and shit, it didn't do much for them. It's not going to. It was easy to see they would have to get close to 10k MHz to cover that latency up.


Oh, so now we are resorting to extraneous reasoning to justify RX Vega?

I already mentioned the fewer stacks, like 3 pages ago, but that doesn't look like it's going to help much with power consumption.

Why are we talking about agile :p or even software development? If the person you are responding to does not have any experience in a software development environment, it will be like speaking a foreign language to them.

BTW, I work with someone who used to manage the (or a) driver QA team @ ATI during the Radeon 8500-9700 days :p.
 
Wow! A new type of banana!

It is completely immune to Panama Disease TR4.

 
It still kills me that they didn't put a blower + AIO on it. Is that patented, where they can't use it? Even at low speed it would greatly increase their cooling and help the passive components.

And for reference, those Fury X thermal images showing it hot as hell were correct (at out-of-the-box settings). The review sites that had it running cool must have cranked the fan or something; I owned one. It sounds the same here, since I read the VRMs thermally throttle? Have any review sites taken thermal scans yet?
 
I'm not so sure that is the case. The pro benchmarks compare favorably to Titan in perf/watt. There are technological advantages that could put it further ahead when taken advantage of. Yet everyone keeps arguing pro benches are invalid because Titan doesn't have pro drivers. At the same time, the same people argue it would take a "miracle" for AMD to gain more than 10% performance from drivers. Because drivers never make that large of a difference...
Your argument falls flat on its face when you compare to a Quadro GP104, which is faster than Vega FE, just like it is faster in games (in its GTX 1080 form). So GP104 is faster in both gaming and pro apps while consuming significantly less power. Coincidence? Nope, that is just Vega's true capabilities.
 
If you have worked on a single team running agile on a program, you would know you are just not making any sense, man.
Who the hell is using Agile for hardware development? That's just bureaucratic marketing BS that no team worth a shit would bother with in the first place, as it wastes their time. Any engineer should have learned to break down a project and prioritize from day one of college, along with basic teamwork skills.

Oh, so now we are resorting to extraneous reasoning to justify RX Vega?

I already mentioned the fewer stacks, like 3 pages ago, but that doesn't look like it's going to help much with power consumption.
Simple facts are extraneous now? It's not even fewer stacks, just smaller ones with more volume to facilitate binning. Unless AMD puts 16GB on everything.

It will lower power consumption by some amount, resulting in a lower TDP. For that reason alone, the lower-TDP-means-slower "guarantee" I responded to breaks down. That's a fairly simple truth; I'm not implying it will make or break power consumption.

Do you know what the critical path is when creating applications? It's what you need to get done first before anything else can be done. In most cases, it's functionality and ensuring that functionality is working in proper order. Even with an agile implementation, that still has to be done first; then and only then can anything else be done.
That takes forever, and no company should be doing that at any scale. With hardware, often everything needs to be completed to function properly. Further, there should be a senior architect who breaks down all the components and distributes them to teams for as much concurrent development as possible. Putting everyone on the same critical path is a recipe for disaster, as they all trip over each other without tightly controlled tasks. Yes, there will be dependencies that may need to be tackled first, but there should be some outline developers can use to work ahead as much as possible. The only exception is a real lack of engineers, but that just falls to simple prioritization.

There's a reason programmers always joke about how adding manpower decreases productivity so a job takes longer. Shortly followed by middle-management meetings to update everyone on progress that ultimately bring progress to a screeching halt.

I don't think it's limited much by memory in performance, given its performance level and what we know from memory OCing other cards at similar levels.
Not memory so much as the clocks affecting the bandwidth between cores on Infinity. Hence memory overclocks gave Ryzen significant performance boosts indirectly. It may be the same case for Vega.

Some of GN's memory overclocks were increasing performance by around half of their gain. There could be some higher-clocked 4-Hi stacks giving an easy 10% that aren't in the catalogs. Just laying out possibilities here.
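As a sanity check on that "half the gain" observation, a tiny Python sketch; the clocks and scaling factor here are assumptions for illustration, not measurements:

# Linear model: only a fraction of extra memory bandwidth becomes frame rate.
# The scaling factor and clocks below are assumptions, not measurements.
scaling = 0.5                     # ~half the bandwidth gain shows up as perf
base_clk, oc_clk = 945.0, 1040.0  # MHz, hypothetical HBM2 clocks
bw_gain = oc_clk / base_clk - 1.0
print(f"+{bw_gain:.1%} bandwidth -> about +{scaling * bw_gain:.1%} performance")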

Your argument falls flat on its face when you compare to a Quadro GP104, which is faster than Vega FE, just like it is faster in games (in its GTX 1080 form). So GP104 is faster in both gaming and pro apps while consuming significantly less power. Coincidence? Nope, that is just Vega's true capabilities.
How's it falling flat? GP104 is now faster than GP102? The whole argument was that drivers can make a rather significant difference. A difference your example would confirm. Follow Linux driver development, and seeing huge swings as new features hit isn't uncommon. You're assuming AMD's pro drivers are fully functional, or that performance isn't entirely tied to geometry performance, where Nvidia has an inherent advantage that is very situational. Like RX, we haven't seen the pro Vegas either. Just the FE/engineering sample edition allowing some developers to get started.
 
How's it falling flat? GP104 is now faster than GP102? The whole argument was that drivers can make a rather significant difference.
Yes, GP104 with pro drivers is faster than GP102 with gaming drivers.
Like RX, we haven't seen the pro Vegas either. Just the FE/engineering sample edition allowing some developers to get started.
Nope, AMD themselves confirmed Vega FE is using pro drivers; they even stated the Titan Xp didn't have the advantage of pro drivers like the Vega FE does.
 
Who the hell is using Agile for hardware development? That's just bureaucratic marketing BS that no team worth a shit would bother with in the first place, as it wastes their time. Any engineer should have learned to break down a project and prioritize from day one of college, along with basic teamwork skills.

Look it up; there are white papers on it if you want to search for them. Yes, they do use it for hardware development. Any companies that don't are companies that don't need to, pretty much ones whose development timelines aren't bound by market timelines. And if you had ever seen ATi's requirements for driver developers, Agile is a must for them to have.

It's not only prioritizing things to be done; that is the first step, part of the agile process. Again, you don't know what the hell that is, nor what the scope entails. So how you get from A to Z beats me. Everything seems to be easy for you. It's not. Even logically, when looking at this kind of scope, it's not easy.

When doing Story A (tasks), things might come up where you need to go back and create new stories to accomplish Story A. There are many unknowns when making a new application, in this case drivers. The functionality is well known, as the chip design team spells everything out for the driver team, but the work the driver team does still has unknowns in how to get the functionality fully working in code. That is why an iterative process is necessary to do such things; otherwise the fallback method, which is what you are hinting at, is waterfall, and waterfall does NOT work well with large projects with many moving parts, as explained in the next paragraph.

The design teams go through agile creation of the design too. The driver team must comply with the design team's time schedules and sprint cycles, otherwise things at the end just won't work right. There are also money implications in using waterfall vs. agile systems. In agile the cost is higher up front because of the iterative process (visibility is paramount in agile, as it gives upper management a chance to shift things around if need be), but when using waterfall the cost is at the back end of the project, and there is no visibility, nor can a single team really function as a group in this type of methodology, which can break a project; the chances of that are really high, because there are time constraints that increased costs can't cover at that point. This is why waterfall went the way of the dodo when it comes to large projects.

The driver team is just an extension of what needs to be done based on what the engineering design team is doing, NOT THE OTHER WAY AROUND. The timelines are set up by the design team; the sprint cycles are set up by the design team. The driver team is well aware of the changes being made by the design team, because they are also part of the scrum meetings the design team holds, or they get their information from the project manager of the design team well before the sprint cycle is over. Then the driver team can adjust their stories and sprint cycles as needed.

Simple facts are extraneous now? It's not even fewer stacks, just smaller ones with more volume to facilitate binning. Unless AMD puts 16GB on everything.

It will lower power consumption by some amount, resulting in a lower TDP. For that reason alone, the lower-TDP-means-slower "guarantee" I responded to breaks down. That's a fairly simple truth; I'm not implying it will make or break power consumption.

Being slower or lower ;) And yeah, it's not going to make or break power consumption levels. That's why the TDP of RX Vega stayed the same. We already see that with the current TDP, Vega FE can't keep its clocks at max boost. And we also see its power scales extremely aggressively with increases in frequency. Yes, it's extraneous.


That takes forever, and no company should be doing that at any scale. With hardware, often everything needs to be completed to function properly. Further, there should be a senior architect who breaks down all the components and distributes them to teams for as much concurrent development as possible. Putting everyone on the same critical path is a recipe for disaster, as they all trip over each other without tightly controlled tasks. Yes, there will be dependencies that may need to be tackled first, but there should be some outline developers can use to work ahead as much as possible. The only exception is a real lack of engineers, but that just falls to simple prioritization.

The senior architect is not the one that breaks it down for the driver team ;) It's the senior project manager's job to do that (or the scrum master's, who is also an engineer). The engineering team is there to fill in any questions as needed, though.
There's a reason programmers always joke about how adding manpower decreases productivity so a job takes longer. Shortly followed by middle-management meetings to update everyone on progress that ultimately bring progress to a screeching halt.

Depends on the size of the team, how well they function together, and how experienced they are. I have had teams with all senior members that work great together, and everything gets done with little management; I have had other teams with junior-level programmers, and they might see eye to eye, but at the end of the day it takes longer for them because their code rules are not as solid.

Not memory so much as the clocks affecting the bandwidth between cores on Infinity. Hence memory overclocks gave Ryzen significant performance boosts indirectly. It may be the same case for Vega.

Significant in what way? Significant as in able to compensate for the latency, or significant in your eyes?

In my view, if the latency is still there, it's not significant.

How's it falling flat? GP104 is now faster than GP102? The whole argument was that drivers can make a rather significant difference. A difference your example would confirm. Follow Linux driver development, and seeing huge swings as new features hit isn't uncommon. You're assuming AMD's pro drivers are fully functional, or that performance isn't entirely tied to geometry performance, where Nvidia has an inherent advantage that is very situational. Like RX, we haven't seen the pro Vegas either. Just the FE/engineering sample edition allowing some developers to get started.

Why does the GP104 with pro drivers hang around Vega FE with pro drivers, then? Why are the Vega FE and GP104 around the same with gaming drivers?

GP104 with pro drivers is leagues in front of the Titan Xp in pro apps too.
 
Nope, AMD themselves confirmed Vega FE is using pro drivers; they even stated the Titan Xp didn't have the advantage of pro drivers like the Vega FE does.
So you can show me a link to the WX or Instinct lines with certified pro drivers? Guess I didn't realize they were out yet. Probably because they haven't released, though.

If the gaming drivers aren't quite ready, why would the certified pro stuff be complete? Who's to say the pro tests are any more complete than the gaming ones, as neither product has released?
 
So you can show me a link to the WX or Instinct lines with certified pro drivers? Guess I didn't realize they were out yet. Probably because they haven't released, though.

If the gaming drivers aren't quite ready, why would the certified pro stuff be complete? Who's to say the pro tests are any more complete than the gaming ones, as neither product has released?


They are both ready; optimizations for the gaming drivers, AKA per-application optimizations, aren't ready. That is a totally different scenario than functionality.
 
Look it up; there are white papers on it if you want to search for them. Yes, they do use it for hardware development. Any companies that don't are companies that don't need to, pretty much ones whose development timelines aren't bound by market timelines. And if you had ever seen ATi's requirements for driver developers, Agile is a must for them to have.
Ok.

https://jobs.amd.com/job/Sunnyvale-3D-Graphics-Driver-Performance-Engineer-CA-94085/411783100/

The only one that mentioned Agile was for a web developer. All the ASIC, power, verification, etc. hardware guys apparently don't bother with it. Guess that's why engineering schools don't bother teaching it.
 
Ok.

https://jobs.amd.com/job/Sunnyvale-3D-Graphics-Driver-Performance-Engineer-CA-94085/411783100/

The only one that mentioned Agile was for a web developer. All the ASIC, power, verification, etc. hardware guys apparently don't bother with it. Guess that's why engineering schools don't bother teaching it.


Look a little harder

engineers need agile

https://www.indeed.com/viewjob?jk=771b10e98d849983&q=amd+agile&tk=1bl1a32j35t4kf63&from=web

Now for IGPU's

https://www.indeed.com/viewjob?jk=1c5354db9957add1&q=amd+agile+gpu&tk=1bl1a5miq5t4kdls&from=web

Back to AMD

https://jobs.amd.com/key/scrum-master-ba-pm-tx.html

All ya had to do was search for scrum master, and it shows up everywhere.

That means any team that is working on those kinds of projects needs to know agile.

That's why I mentioned ATi's job requirements, if you remember; just because it's not mentioned anymore doesn't mean they don't use it.

https://jobs.amd.com/job/Markham-MTS-Product-Development-Eng_-ON/404659800/

Hmm, a guy that works on GPUs, an engineer, with a nice-to-have ability to do scrum, agile, and kanban (which is another version of agile). Yeah, it's a requirement; they will teach you if you don't know it. So they don't always write it down.

Still think engineering projects don't use agile?

Oracle is using Agile PLM

http://www.oracle.com/us/products/applications/agile/agile-for-semiconductor-1868201.pdf

hmm CUDA drivers

https://nvidia.wd5.myworkdayjobs.co...ior-Software-Engineer---CUDA-Driver_JR1902228

Agile is being used. They will teach you if you don't know it.

http://www.eetimes.com/author.asp?doc_id=1327239

What was I saying about time and budgets?

All large engineering companies have gone to agile methodology or some extension of it.

Back to AMD again,

The search for scrum came out well; let's do agile:

https://jobs.amd.com/search/?q=agile&locationsearch=
There are three engineers in Singapore with that need..... and quite a few in CA. Guess where the GPUs are made?
 
All Furys sold out, so not a failure at all. It may not have outperformed Nvidia's top performer, but it did hang with it.
The regular Furys didn't; retailers were offloading them for $250 until a few months ago.
And how many Fury X's did they even make? There's a reason RX Vega is over a year behind schedule, and it's because AMD put flagships on the back burner. It never even broke out of the "Other" category on the Steam survey.

At this point, RX Vega being a repeat of Fury X would be a blessing.
 
Source .. ?


Well, OEMs were finally starting to sell Fury X's after the 1080 and 1070 were already being sold by them, 8 months after the launch of Fury X; that is pretty telling too. Pretty much, OEMs got a GREAT deal to get rid of Fury X.

And as Tainted stated earlier, low-end Fury products were still going for 250 bucks just a couple of months ago, a card that was pretty much EOL when Polaris was released. AMD would not have still been making those lower-end Furys; it would just kill their margins for no reason at all, especially when they had a valid replacement.

Also, if you look at inventory numbers after the Fury X launch, inventory went up. That wasn't the CPU division; the CPU division had already been down in the dumps for a few years by that point. What inventory was stockpiling? Fury and R3xx products.

Remember the increase in GPU market share when Polaris was launched (the first full quarter, then the subsequent full quarter when AMD lost ground again)? That actually coincided with the OEMs selling Fiji products. Pretty much, after Polaris was released there was no need for any Fiji product to be made other than Fury X and Nano. But inventory didn't show up that way. And as Dayaks stated, the red became more red after Fiji was launched.

It's easy to see that Fury products didn't do anything for AMD.

And come on, did anyone think it would? 980 Ti performance 6 months later at the same price with higher power usage. What use is there for anyone to switch over or buy a Fury X? The only buyers would be people who prefer AMD over nV; no other valid reasons.
 
Well, with Ethereum prices coming way down, hopefully the video card market will return to normal and people will be able to pick up the card they want. So hopefully, for those that want Vega, it will be available for them rather than being snapped up by miners, depending on the price it comes out at.
 