AMD plans to release Vega refresh in 2018

Yeah, I don't think it's going to help much at all either. But I guess it's better than waiting on a new architecture from AMD.

So, this is a bit of conjecture on my part (because my source would neither confirm nor deny it), but from what I infer, the Vega refresh is not so much about battling for supremacy with Nvidia.

Rather, there is a concern that AMD could be pushed out of the market by NVIDIA's Volta launch (especially considering how much it actually costs to produce Vega).

Vega's successor won't be completed in time, so Vega's refresh is what AMD is stuck with.
 
Is this what it has come to?

A random troll posting made-up BS, citing himself as the source, and we are supposed to take it seriously?

This is a tech forum. If you make bold claims without a source, you deserve to be BANNED.

I think you forgot to look at my track record.

I was the first to say that Vega would be released in three variations: cut-down Vega, air-cooled Vega, and liquid-cooled Vega

I was the first to say that liquid Vega is barely faster than the GeForce GTX 1080

I was the first to mention that Vega would be priced higher than expected due to component costs.

I was the first to mention that Vega's launch would miss the publicly announced 1H 2017 deadline.

I was the first to mention that AMD is looking to replace Raja.
 
So, this is a bit of conjecture on my part (because my source would neither confirm nor deny it), but from what I infer, the Vega refresh is not so much about battling for supremacy with Nvidia.

Rather, there is a concern that AMD could be pushed out of the market by NVIDIA's Volta launch (especially considering how much it actually costs to produce Vega).

Vega's successor won't be completed in time, so Vega's refresh is what AMD is stuck with.


Well, they have already been pushed out of the graphics market, mainly because of their lack of architectural innovation. nV's cadence moved to about 1.5 years per generation by dropping the mid-cycle "refresh", so in essence they sped up their timelines, not slowed them down, while AMD stretched its cadence out past 5 years on the same architecture. One more generation of this and I don't think they can get back into the graphics race at all unless nV really screws up or slows down, and neither looks likely at this point. The cost of innovation increases substantially if you aren't iterating generation over generation and instead try to do it all in one lump sum. Going small only worked for AMD before because nV stuck with Tesla for 3 generations, one more than they normally would have, and of course Fermi had its birthing problems.
 
What hashrates on what algos are you getting?

My Vega 64 *system* is drawing 170W at the wall (total system draw...not just the card) and pulling 2kH/s mining Cryptonight on Nicehash. And that's with an 80+ Bronze PSU, to boot.

According to WhatToMine, with my power rate of 13.6c/kWh, I'm netting almost $6/day.
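
For anyone sanity-checking the $/day figures in this exchange, the math is just daily revenue minus daily electricity cost (wall power times hours times rate). A minimal sketch follows; the $6.50/day gross used here is an assumed placeholder for whatever WhatToMine reports, not a number quoted in the thread:

```python
# Rough daily-profit math for the numbers quoted above (a sketch, not a mining tool).
# The gross revenue figure is whatever WhatToMine reports for your hashrate/coin;
# the $6.50/day used here is only an assumed placeholder.

WALL_POWER_W   = 170      # total system draw at the wall, as measured
POWER_RATE_KWH = 0.136    # $/kWh (13.6 c/kWh)
GROSS_PER_DAY  = 6.50     # assumed WhatToMine revenue estimate, $/day

power_cost_per_day = WALL_POWER_W / 1000 * 24 * POWER_RATE_KWH   # kWh/day * $/kWh
net_per_day = GROSS_PER_DAY - power_cost_per_day

print(f"Electricity: ${power_cost_per_day:.2f}/day")   # ~ $0.55/day at 170 W
print(f"Net:         ${net_per_day:.2f}/day")          # ~ $5.95/day with the assumed gross
```

At 170 W the electricity works out to roughly $0.55/day, which is why the net and gross figures end up so close at residential rates.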
Well, I would be most interested in how you did that - mine is utterly terrible compared to yours. Hell, before I went to work it was pulling $2.85/day, so I just pulled the plug. The 1080 Ti was pulling a little over $4.00 a day with NiceHash.

So what is the trick, method, etc.?

Thanks
 
Those could have easily been guessed, though, and were expected by most people I talk to in the tech world.

And if I'm not mistaken, Nvidia won't even be releasing a consumer Volta?

We kind of knew a Vega refresh was coming as well; the Vega 11 rumors also hinted at that.

But as far as current-gen Vega being basically broken and unfixable on some of its features, that just seems like BS, man. I know their marketing team is incompetent, but their engineering and driver teams seem decent; I doubt they'd make a mistake like that when they know their company has no money.
 
I'm not sure how you are getting those results with that power usage, because I read the entire Cast XMR thread and no one there is getting your results with 170 watts of total system power consumption. Please send me that info too; that is something I would be interested in.




So around 180 watts per Vega with the methods you are describing?

That puts it just over 50 cents a day above a 1080 Ti in profitability.
 
Is this what it has come to?

A random troll posting made-up BS, citing himself as the source, and we are supposed to take it seriously?

This is a tech forum. If you make bold claims without a source, you deserve to be BANNED.

No, sorry. You deserve to have your points challenged, which is what is going on here.
 
Since when have high-end gamers cared about power consumption? Heat, sure, but power draw? Come on!

Another way of looking at it is that higher efficiency can ultimately lead to higher performance, which is something high-end gamers and enthusiasts simply cannot get enough of. That extra thermal and power headroom can be used to further increase performance, which is exactly what Nvidia has been doing for the past few generations and is the "secret sauce" behind their dominance over the competition. Not only that, but doing more with less also helps their margins, which means more money for R&D and profits.

On the other hand, high power consumption and heat can limit performance, clock speeds, and other things that matter to enthusiasts.
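
A toy calculation makes the headroom point concrete (the numbers below are invented, purely illustrative): if performance scales roughly as efficiency times power spent, then at a fixed board-power budget a perf/W advantage converts directly into a performance advantage.

```python
# Toy numbers, purely illustrative: perf ~ (perf per watt) x (watts spent)
POWER_BUDGET_W = 250            # a fixed board-power budget

perf_per_watt_baseline = 1.0    # baseline architecture
perf_per_watt_improved = 1.3    # hypothetical 30% more efficient architecture

perf_baseline = perf_per_watt_baseline * POWER_BUDGET_W
perf_improved = perf_per_watt_improved * POWER_BUDGET_W

print(perf_improved / perf_baseline)  # 1.3 -> the efficiency lead becomes a performance lead
```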
 
I think you forgot to look at my track record.

I was the first to say that Vega would be released in three variations: cut-down Vega, air-cooled Vega, and liquid-cooled Vega

I was the first to say that liquid Vega is barely faster than the GeForce GTX 1080

I was the first to mention that Vega would be priced higher than expected due to component costs.

I was the first to mention that Vega's launch would miss the publicly announced 1H 2017 deadline.

I was the first to mention that AMD is looking to replace Raja.

No, you weren't. Those things were circulating on various forums before being posted here.
 
Those could have easily been guessed, though, and were expected by most people I talk to in the tech world.

And if I'm not mistaken, Nvidia won't even be releasing a consumer Volta?

We kind of knew a Vega refresh was coming as well; the Vega 11 rumors also hinted at that.

But as far as current-gen Vega being basically broken and unfixable on some of its features, that just seems like BS, man. I know their marketing team is incompetent, but their engineering and driver teams seem decent; I doubt they'd make a mistake like that when they know their company has no money.

Just look at his post history about Vega and you will understand.
 
https://videocardz.com/74260/amds-james-prior-talks-ryzen-2-and-vega-11

Straight from the horse's mouth:

AMD is the one that has supply issues

It has been confirmed that RX Vega stocks will be increased shortly. This will allow retailers, such as OverclockersUK, to adjust the price accordingly. Our sources have confirmed that AMD is finally supplying partners with Vega chips, which will allow them to introduce custom SKUs in satisfactory number, while reference designs will no longer be produced.
 
I think you forgot to look at my track record.

I was the first to say that Vega would be released in three variations: cut-down Vega, air-cooled Vega, and liquid-cooled Vega

I was the first to say that liquid Vega is barely faster than the GeForce GTX 1080

I was the first to mention that Vega would be priced higher than expected due to component costs.

I was the first to mention that Vega's launch would miss the publicly announced 1H 2017 deadline.

I was the first to mention that AMD is looking to replace Raja.
he's right.jpg
 
I think you forgot to look at my track record.

I was the first to say that Vega would be released in three variations: cut-down Vega, air-cooled Vega, and liquid-cooled Vega

I was the first to say that liquid Vega is barely faster than the GeForce GTX 1080

I was the first to mention that Vega would be priced higher than expected due to component costs.

I was the first to mention that Vega's launch would miss the publicly announced 1H 2017 deadline.

I was the first to mention that AMD is looking to replace Raja.

All of those points you raise were discussed elsewhere in the tech sphere well before they materialized.

I mean stuff like cut-down Vega, wtf do you think has been happening for the past 2 decades of GPU binning? LC Vega we saw ages ago in leak images, and after the Fury X, well, too easy.

HBM2 is expensive, we all know. The interposer step adds cost, we all know.

Vega launched within the 1H 2017 deadline; it's called the Frontier Edition.

When did you mention AMD was looking to replace Raja?

If you want to claim a first on something, make your own tech blog and post such claims publicly, dated, and then we can determine whether you were first or right when the time comes. But on a tech forum full of discussions of these topics, no, you were not the first. To claim so is egotistical beyond sanity.
 
All of those points you raise were discussed elsewhere in the tech sphere well before they materialized.

I mean stuff like cut-down Vega, wtf do you think has been happening for the past 2 decades of GPU binning? LC Vega we saw ages ago in leak images, and after the Fury X, well, too easy.

HBM2 is expensive, we all know. The interposer step adds cost, we all know.

Vega launched within the 1H 2017 deadline; it's called the Frontier Edition.

When did you mention AMD was looking to replace Raja?

If you want to claim a first on something, make your own tech blog and post such claims publicly, dated, and then we can determine whether you were first or right when the time comes. But on a tech forum full of discussions of these topics, no, you were not the first. To claim so is egotistical beyond sanity.

I saw a good chunk of these posts, and I'm pretty sure they came before anyone else was saying it. Check the post history.
 
I saw a good chunk of these posts, and I'm pretty sure they came before anyone else was saying it. Check the post history.


Well, the only one I can speak for: I stated that Vega would come out at around GTX 1080 performance after seeing Polaris ;) The others he definitely stated; I don't know if he was the first, I wasn't really paying attention to that, but I'm pretty sure he was spot on early.
 
I saw a good chunk of these posts, and I'm pretty sure they came before anyone else was saying it. Check the post history.

And wrong, they were being discussed on other forums long before the thread even started here.
 
All of those points you raise were discussed elsewhere in the tech sphere well before they materialized.

If you want to claim a first on something, make your own tech blog and post such claims publicly, dated, and then we can determine whether you were first or right when the time comes. But on a tech forum full of discussions of these topics, no, you were not the first. To claim so is egotistical beyond sanity.

Basically this. Good post.
 
Shouldn't that be easily proven with a couple of links?
 
Sigh.
Lots of guys on the Nvidia payroll here.


How hard is it to find? This was one of my early posts about Vega and its performance, from August of 2016, and I know I made ones before this, extrapolating performance from what AMD stated Vega was going to be, the leaked word-cloud screenshots, and Polaris.

All the talk about the front-end changes in Polaris: I don't see anything really, outside of the tessellation performance increases. I know it has somewhat better throughput, but not to the degree AMD alluded to; if I remember correctly, AMD stated something like 20% more, which I don't see outside of *special* situations. So unless they overhauled their front end from Polaris again.....

What I think the IP changes amount to is more about specific feature sets, like DX12.1 with better CRs (conservative rasterization), things like that. I don't see, and can't even fathom, them making radical architectural changes in Vega to overcome the perf/watt and throughput gap relative to their nV counterparts, outside of the use of HBM2.

PS: I think the changes AMD has to make to really become competitive again go much deeper than just the front end; I think their transistor arrangement and possibly even their shader array structure has to change. And this is the area where transitions to new, smaller nodes have become much more expensive and time-consuming.


Here's a trip down memory lane:

I am serious, I don't see anything but the tweaks they have talked about. Even the final numbers they have been quoting for everything don't seem much different than what they have had so far, and can be explained by doubling up Polaris.

This is why ya don't believe the marketing crap, cause that is all it is: crap.

Look what happened with Polaris: all of what they showed, did it come out true?

Its perf/watt is still only on par with last-gen Maxwell products. Yet they were trying to show how much better they were in simulated tests.

Then it got its ass handed to it in polygon throughput figures. Is it better? Yeah. But is it good enough? Nope.

Then you had all those front-end changes that were going to improve its IPC. Did that happen? No, it wasn't IPC, it was all down to triangle throughput....

So you have TBR, which is what AMD has really been talking about; the memory bus thing is BS, they already have that and don't need anything special from Vega. Then you have primitive discard.

So what will TBR and primitive discard give them? The same thing: triangle throughput (plus bandwidth savings), but in limited scenarios, as explained in a few videos and on a few websites, and it needs to be programmed for through the primitive shaders to get the best out of it. So let's forget about all the old games or games coming out in the near future; we have to wait another 6 months to a year after Vega to see games take advantage of it, maybe even longer.....

So pretty much AMD created a primitive shader pipeline because fixing the underlying problem of GCN, its triangle throughput issues, would have required gutting and redoing it, something they could ill afford at this time because of cost and/or time. The primitive shaders look to be a modified CU, so adding extra instructions to their current architecture would be easier for them than gutting GCN to fix these problems.

I have stated this many, many times: making something programmable only makes sense when the performance is right relative to a fixed-function unit within a given transistor budget, and the baseline performance should be there before programmability, because the extra programming work should not be a necessity; older programs, or programs released before the product with the new programmable features, won't show any benefits, and that defeats the purpose. Look at what happened with GCN: all that async talk turned out to be BS when you start looking at how most games only get a 5% performance increase at most....

Then you have the hundreds of people that just believed the marketing hype, and it came out as just blah. And we still have those people, cause they don't know WTF async is, lol.

AMD has been harping on about these with Vega, and then add in the fact that they are quoting maximums for very specific circumstances depending on the program, and well, there you have it. That sounds like BS to me. These are exactly the two features AMD has been getting hit hard on by nV, and now they have something programmable whose gains can't be seen until developers use the new primitive shader pipeline, at least not the figures they have talked about. Yeah, that is a hard pill for me to swallow, cause even if developers did program for it, they would reach up to Maxwell-level triangle throughput, and I'm not sure how much bandwidth they can save, but I don't think they will hit Pascal's bandwidth numbers (30% less than Polaris). That is best case, man.

Call me a glass-half-empty guy, I don't care, but the truth is AMD showed us the best they could show, and it was Doom. In Doom they should be faster than competing nV products, and they didn't really show that; it's somewhere around a 1080 in performance. So as for all that extra stuff AMD was talking about: Doom is an old game, it's not going to have any special programming for Vega, so there ya go, you don't get something for nothing.

Then you have the TDP figures from their Instinct cards, which should correlate directly with the GPUs in the gaming cards since it's the same GPU. And yeah, Instinct card specs are very stringent because of the form factors they go into. So when they say <300, better believe it's going to be more than 250.

There is too much info out there, released by AMD themselves to keep expectations in check, not to figure out that what you are saying is just crazy talk.

So if AMD isn't saying what you are saying, where do you think you are getting your info from?

This was when? Yeah, early this year.

And now look at the people who responded to me, LOL, do you see the same people here? Yeah, they shouldn't even show up in this forum anymore after the flak they gave me........ They should be embarrassed to post now, but they aren't, because their memory is fleeting. But I remember every single person that said, "Razor1, you are wrong" and couldn't tell me why I'm wrong.

What is that saying? "Better to keep your mouth shut and be thought a fool than to open your mouth and remove all doubt"...........
 
Well, the only one I can speak for: I stated that Vega would come out at around GTX 1080 performance after seeing Polaris ;) The others he definitely stated; I don't know if he was the first, I wasn't really paying attention to that, but I'm pretty sure he was spot on early.
Pretty sure we all knew this one; like I said, if Vega was better than the 1080, they would have been shouting it from the rooftops, not hiding in the corner praying nobody noticed it sucked, lol.
Not really, dude, because otherwise we would be over in the Nvidia subforum crapping all over there.
What is there to crap on, other than the stupid Star Wars edition Titan Xp?
 
The burden of proof is on the guy claiming he was first.


Look, I remember him saying some of those things early on. I don't know if he was the "first" person to say them, because I don't particularly fancy looking on other forums, but here he was one of the first.

Pretty sure we all knew this one; like I said, if Vega was better than the 1080, they would have been shouting it from the rooftops, not hiding in the corner praying nobody noticed it sucked, lol.


And no, a lot of people didn't like what I had been stating since June of 2016, when I said Vega was going to be low performance and high power usage; that was well before they showed off Doom, which was early 2017, 1 Q before. I even stated at that point that certain things in silicon just can't be done the way AMD was describing at the time ;). This comes from a deep understanding of how chips are made and how software is made to run on them. The people that understood this, yeah, there were quite a few who agreed with me, but the naysayers were far more numerous and running amok.

People have short memories (not pointing at you; your posts have always been solid and credibly backed) and they just don't want to admit they were wrong, nor do they appreciate knowledge for what it's worth, and its worth is understanding how these companies operate and what the potential of their products is.

As much insider info as I get about AMD and nV products, that info is coming from inside and always has an angle on it. So people that don't understand these things, yeah, they are going to scorn, troll, name-call, and just be the general ass, doing anything and everything to make a person feel uncomfortable.

That "6 months extra over Polaris is not enough" quote, I can pull that one up too; that was also around July/August of 2016......

If, at the time Vega was taped out, AMD had stated they were no longer using GCN, my tone about Vega would have been completely different; at that point I would not have known whether the front-end problems would affect the new architecture. But the moment we knew it was 100% modified GCN, I could see no way Vega was going to make any strides in perf/watt, and that didn't come from any insider info. With the info I did get after the Doom showing, I knew at that point Vega was Doomed, pun intended.

Now, in _mockingbird's defense and others', yeah, some of those things he stated could have been logically deduced, but to come out point-blank and say them with certainty? We aren't talking about 1 thing right and 4 things wrong, we are talking about 5 things and more right. ALL RIGHT. You don't get that by drawing a ticket out of a hat.
 
Look, I remember him saying some of those things early on. I don't know if he was the "first" person to say them, because I don't particularly fancy looking on other forums, but here he was one of the first.

Now, in _mockingbird's defense and others', yeah, some of those things he stated could have been logically deduced, but to come out point-blank and say them with certainty? We aren't talking about 1 thing right and 4 things wrong, we are talking about 5 things and more right. ALL RIGHT. You don't get that by drawing a ticket out of a hat.

Pretty much this.
Razor could also see some of these issues coming (and is usually pretty accurate!), but mockingbird was early as hell and had multiple specific points; I would say they are a reliable source. We'll find out in a few months anyway, and it will be interesting to see who eats humble pie. Maybe mockingbird was not the first of the first, but he was very close on multiple points. I'm not desperate enough to go digging it all up, but he was certainly one of the first, if not the first.
 
Those could have easily been guessed, though, and were expected by most people I talk to in the tech world.

And if I'm not mistaken, Nvidia won't even be releasing a consumer Volta?

We kind of knew a Vega refresh was coming as well; the Vega 11 rumors also hinted at that.

One of the worst arguments has to be: "Anyone could have thought of that!"

If that's the case, why didn't you say anything?

I know why: It's easy to say, "Anyone could have thought of that!" after-the-fact.
 
But as far as current-gen Vega being basically broken and unfixable on some of its features, that just seems like BS, man. I know their marketing team is incompetent, but their engineering and driver teams seem decent; I doubt they'd make a mistake like that when they know their company has no money.

So, one thing I want to clarify here.

At least one feature (specifically, tile-based rasterization) actually works on the current hardware.

By that I mean that the feature can be turned on, and it can be verified that it actually works.

The problem is that performance gain has been minimal, and in some cases, performance actually regresses.

So, I guess, it's not technically "broken and unfixable" in the way you've defined it.
 
One of the biggest contributions Lisa Su has made recently is to bring "regular order" to RTG.

In other words, she basically killed the hype train.

No more memes like "Poor Volta"

Everyone is supposed to STFU and work quietly.

They are not going to talk about performance until they are certain they can deliver.
 
One of the biggest contributions Lisa Su has made recently is to bring "regular order" to RTG.

In other words, she basically killed the hype train.

No more memes like "Poor Volta"

Everyone is supposed to STFU and work quietly.

They are not going to talk about performance until they are certain they can deliver.

Just the way it should be.
 
The Vega refresh will come with features enabled, such as primitive shaders and tile-based rasterization, that AMD wasn't able to get working for the initial Vega release.
I don't understand what you are talking about...
DSBR is already implemented and working for a lot of titles. There is a driver mode for developers to force the different rasterization modes, so we can compare the results. My engine saves a lot of bandwidth (~10-15%) with DSBR compared to forced legacy mode, and that is without any direct optimization, with a tiled forward renderer. These gains are pretty good. Even Nvidia hardware doesn't save that much bandwidth, and the engine is already optimized for Maxwell and Pascal.
The primitive shaders may not be implemented. This is hard, because exposing a new shader stage is not useful with the current APIs; it requires a lot of changes in the pipeline. I'm a person who really likes GPU-driven pipelines, so I would like to write primitive shaders, because it is a much better approach than the current shader stages, but on the other hand I understand that breaking pipeline compatibility in the standard APIs with new extensions is not a good idea. AMD only wants to implement a driver-based fast-path solution, and there are working prototype drivers. With this mode the primitive discard rate is generally 2-3x better without any code change. Now, this may not work with every application, so it might require some profiling, and this is why they need a lot of time.
 
Pretty much this.
Razor could also see some of these issues coming (and is usually pretty accurate!), but mockingbird was early as hell and had multiple specific points; I would say they are a reliable source. We'll find out in a few months anyway, and it will be interesting to see who eats humble pie. Maybe mockingbird was not the first of the first, but he was very close on multiple points. I'm not desperate enough to go digging it all up, but he was certainly one of the first, if not the first.

Early as hell? _Mockingbird only joined this forum in February, and these things were discussed and known about long before that.
 
Early as hell? _Mockingbird only joined this forum in February, and these things were discussed and known about long before that.
This. Unless he was posting under a different ID too.

We'd been hearing a lot of rumblings since summer/fall 2016, when they showed the Doom demo.
 
Not really, dude, because otherwise we would be over in the Nvidia subforum crapping all over there.

I think it's pretty crazy that no one on forums seems to believe that Nvidia markets like this. Every release, on every single forum, AMD gets bashed, and there isn't an equivalent to that in any other segment of this market for any other vendor.
It's so frustrating.

EA, maybe? But they have actually, actively made the computing world worse. AMD is the only thing between us and Intel and Nvidia dictating how our computers should perform and what features we need.

We would be paying for individual features like SSX or anti-aliasing if there wasn't a competitor like AMD in the marketplace. Just like we are going to be paying for internet service soon.
 
I think it's pretty crazy that no one on forums seems to believe that Nvidia markets like this. Every release, on every single forum, AMD gets bashed, and there isn't an equivalent to that in any other segment of this market for any other vendor.
It's so frustrating.

Some of the CPU trolling is almost as bad. Honestly, though, I think many of the trolls are fair-weather fans who would switch sides under new aliases if, say, AMD were well ahead in performance.
 
I don't understand what you are talking about...
DSBR is already implemented and working for a lot of titles. There is a driver mode for developers to force the different rasterization modes, so we can compare the results. My engine saves a lot of bandwidth (~10-15%) with DSBR compared to forced legacy mode, and that is without any direct optimization, with a tiled forward renderer. These gains are pretty good. Even Nvidia hardware doesn't save that much bandwidth, and the engine is already optimized for Maxwell and Pascal.

You are lucky if you see that much..... In Unreal you don't get more than 5% savings, and with a performance penalty!

PS: You can't compare them with Maxwell and Pascal; tiled is the only way they render, so we don't know how much bandwidth and power the TBR saves unless nV tells us.


The primitive shaders may not be implemented. This is hard, because exposing a new shader stage is not useful with the current APIs; it requires a lot of changes in the pipeline. I'm a person who really likes GPU-driven pipelines, so I would like to write primitive shaders, because it is a much better approach than the current shader stages, but on the other hand I understand that breaking pipeline compatibility in the standard APIs with new extensions is not a good idea. AMD only wants to implement a driver-based fast-path solution, and there are working prototype drivers. With this mode the primitive discard rate is generally 2-3x better without any code change. Now, this may not work with every application, so it might require some profiling, and this is why they need a lot of time.

It doesn't work that way. Primitive discard is already part of Vega; it was part of Polaris. Primitive shaders don't help with "primitive discard", they help with removing the geometry bottleneck in GCN's traditional pipeline setup, the LACK of enough geometry units.

Not sure why you are putting primitive shaders and primitive discard together, cause they aren't the same thing. Programmer?

Primitive shaders don't do discard; discard is done earlier, before the pixel shaders come in.

I suggest going back into your engine and figuring out why primitive discard isn't working well with it on AMD hardware, cause that is not something automatically fixed by drivers and the implementation of primitive shaders.

Come on, guys, I know not everyone here is a programmer, but ya don't need to be a programmer for this stuff, not the basics. Really, a guy comes here and talks like this, and we are still not questioning the validity of someone who doesn't differentiate primitive discard from primitive shaders?



What did I say? Primitive discard is done before the pixel shaders ;)

The overall performance increase from this method? We are looking at maybe 10% at most. A nice boost, if functional, but not something spectacularly life-changing.

Added note: nV and AMD both have different early primitive discard routines; you can look these up in their Vulkan/OpenGL extensions if you want to test them out. Not only that, it can be implemented with quite a big benefit for AMD hardware (not all of the benefit, but ya get most of it) in the traditional pipeline. The intra-chip bandwidth savings, cache savings, and clock cycle savings won't be there, but the number of polygons discarded before they reach the pixel shader will be, which is where most of the burden is on AMD hardware currently.
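
For anyone wondering what "discarded before it reaches the pixel shader" looks like in practice, here is a minimal CPU-side sketch of the kind of per-triangle test early discard performs: back-facing and zero-area (degenerate) triangles can be rejected from their projected vertex positions alone, before any pixel work happens. This is only an illustration of the concept, not how either vendor's hardware or API extensions actually expose it.

```python
# Illustrative only: the kind of per-triangle test early primitive discard performs.
# Triangles are given as 2D screen-space vertices (x, y) after projection.

def signed_area(a, b, c):
    """Twice the signed area of triangle abc; the sign encodes winding order."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def should_discard(tri, eps=1e-6):
    """Discard back-facing (assuming CCW = front-facing) or degenerate triangles."""
    area2 = signed_area(*tri)
    return area2 <= eps   # back-facing or (near) zero area -> never reaches the pixel shader

triangles = [
    ((0, 0), (10, 0), (0, 10)),   # front-facing (CCW), kept
    ((0, 0), (0, 10), (10, 0)),   # back-facing (CW), discarded
    ((0, 0), (5, 5), (10, 10)),   # degenerate (collinear), discarded
]

for t in triangles:
    print(t, "discard" if should_discard(t) else "keep")
```

The point of the toy test is simply that rejected triangles never generate any pixel-shader work, which is where the savings come from.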
 
I think it's pretty crazy that no one on forums seems to believe that Nvidia markets like this. Every release, on every single forum, AMD gets bashed, and there isn't an equivalent to that in any other segment of this market for any other vendor.
It's so frustrating.

EA, maybe? But they have actually, actively made the computing world worse. AMD is the only thing between us and Intel and Nvidia dictating how our computers should perform and what features we need.

We would be paying for individual features like SSX or anti-aliasing if there wasn't a competitor like AMD in the marketplace. Just like we are going to be paying for internet service soon.

Some of the CPU trolling is almost as bad. Honestly, though, I think many of the trolls are fair-weather fans who would switch sides under new aliases if, say, AMD were well ahead in performance.

I think it's pretty crazy that people think that subforums are 'safe spaces'. They're not; the easiest way to see what's going on in the forums is with the 'New Posts' function, and it grabs everything.

Further, if you have a problem with AMD getting called out for releasing slower, hotter, and louder products behind their competition, well, reality bites.
 
I think it's pretty crazy that no one on forums seems to believe that Nvidia markets like this. Every release, on every single forum, AMD gets bashed, and there isn't an equivalent to that in any other segment of this market for any other vendor.
It's so frustrating.

EA, maybe? But they have actually, actively made the computing world worse. AMD is the only thing between us and Intel and Nvidia dictating how our computers should perform and what features we need.

We would be paying for individual features like SSX or anti-aliasing if there wasn't a competitor like AMD in the marketplace. Just like we are going to be paying for internet service soon.


Bring that up at AMD's tech conferences, man; THEY are the ones producing subpar products, lol. You can't blame that on a consumer's right to choose which products are better for them, right? Sorry, but AMD has been losing since nV introduced Maxwell, and there is no turning back the clock.
 