AMD plans to release Vega refresh in 2018

I think it's pretty crazy that people think that subforums are 'safe spaces'. They're not; the easiest way to see what's going on in the forums is with the 'New Posts' function, and it grabs everything.

Further, if you have a problem with AMD getting called out for releasing products that are slower, hotter, and louder than the competition's, well, reality bites.

So you ignored what I said completely just so you could make a 'safe space' joke.

I called out shady marketing tactics and you went off on this tangent. Keep reading what people say and then injecting your own deluded idea of whatever makes you feel good.
Have fun with that nonsense.
You understand they farm this stuff out, right? I have a buddy who ran one such farm. I was commenting on a real thing; you decided to take it somewhere else.
Forums are the most fucked-up form of communication ever. It's all some ridiculous pissing match. Like, you legit took it personally that I know Nvidia markets the way I referenced, and decided to lash out at me/AMD.

Man, this universe is fucked.
 
So you ignored what I said completely just so you could make a 'safe space' joke.

I called out shady marketing tactics and you went off on this tangent. Keep reading what people say and then injecting your own deluded idea of whatever makes you feel good.
Have fun with that nonsense.


Have you even seen AMD mentioned in any of nV's marketing stuff lately? I think the last one was the 1060 vs Polaris; before that, no mention of them since Kepler. nV is actively trying to distance themselves from AMD from a marketing perspective, so I'm not sure where you are getting your "nvidia marketing tactics" from.

Yes, truly nonsense.

Oh, OK, to factor in your edited post:

That kind of marketing is actually against the rules on this site lol, and on many other sites, if you read the forum rules.

And yeah, it has been done by BOTH AMD AND nV in the past; nV did it first against ATi, but ATi/AMD did it more recently and STILL does it in other forms of media.
 
It's not truly nonsense; you're refusing reality. Nvidia pays farms to post on forums, YouTube, etc.

And you refuse to believe it's real because of some weird issue of yours. I am gonna put you on ignore before you start throwing a hissy fit and try to have me banned.
 
It's not truly nonsense; you're refusing reality. Nvidia pays farms to post on forums, YouTube, etc.

And you refuse to believe it's real because of some weird issue of yours. I am gonna put you on ignore before you start throwing a hissy fit and try to have me banned.


BS, you don't know that; show me proof they are doing it right now!

When you make SERIOUS allegations of illegal activity, back that shit up, man!

In case you don't remember, in 2004 or 2005 Sony got sued for using the exact same tactics you just described on social media networks, and this extends to forums too. The FTC does have regulations against this, so you'd better have the proof to back your statements up.
 
I think it's pretty crazy that people think that subforums are 'safe spaces'. They're not; the easiest way to see what's going on in the forums is with the 'New Posts' function, and it grabs everything.

Further, if you have a problem with AMD getting called out for releasing slower, hotter, and louder products behind their competition, well, reality bites.
Not sure why you quoted me as wanting a 'safe space'. This sub does get well more than its fair share of flaming, though. Criticism is one thing, but constant, nigh-obsessive thread crapping is quite another.

Read my post and you'll see that while I agreed with stealthballer123 on the problem of overactive trolling, I disagreed on the uniqueness of the issue as well as the probable source.
BS, you don't know that; show me proof they are doing it right now!

When you make SERIOUS allegations of illegal activity, back that shit up, man!

In case you don't remember, in 2004 or 2005 Sony got sued for using the exact same tactics you just described on social media networks, and this extends to forums too. The FTC does have regulations against this, so you'd better have the proof to back your statements up.

It was a while ago, granted, but NV has admitted to giving forum users free hardware in exchange for community participation.
This article and some of the ones it links to have it straight from the horse's mouth:
https://consumerist.com/2006/02/06/did-nvidia-hire-online-actors-to-promote-their-products/

It also seems like they backed away from investigating too far due to some harsh responses NV gave them.
This was written a while ago, but since NV admitted no wrongdoing and definitely has a plenty big marketing budget, I don't see why they would have stopped.
 
Not sure why you quoted me as wanting a 'safe space'. This sub does get well more than its fair share of flaming, though. Criticism is one thing, but constant, nigh-obsessive thread crapping is quite another.

Read my post and you'll see that while I agreed with stealthballer123 on the problem of overactive trolling, I disagreed on the uniqueness of the issue as well as the probable source.


It was a while ago, but NV has admitted to giving forum users free hardware in exchange for community participation.
This article and some of the ones it links to have it straight from the horse's mouth:
https://consumerist.com/2006/02/06/did-nvidia-hire-online-actors-to-promote-their-products/

It also seems like they backed off due to some harsh responses NV gave them, before they could completely get to the bottom of things.


That was in 2005-6, which I mentioned; AMD did the same thing shortly after too.

The Kepler thing, that was BS; people were posting about "rumors", come on, they were saying Kepler was coming out at 300 bucks or something like that, and AMD's flagship was double that. The things they were talking about were SO far off it couldn't even be considered marketing.

He is pulling up something that has been shut down for over a decade now lol

Yeah, OK...

As for anything recent: there are strict FTC rules that state anyone posting on public electronic communication platforms must disclose information about themselves and their affiliations if the product being talked about is owned by the company that is providing any type of compensation to that person.
 
That was in 2006, which I mentioned; AMD did the same thing shortly after too.

He is pulling up something that has been shut down for over a decade now lol

Yeah, OK...
Damn, you replied practically before I hit save. I already made a couple of edits to point out that they are probably still giving hardware away if they think it is legal to do so.

AMD is a business too. They are not saints, but that wasn't the point I was trying to address.
 
Damn, you replied practically before I hit save. I already made a couple of edits to point out that they are probably still giving hardware away if they think it is legal to do so.

AMD is a business too. They are not saints, but that wasn't the point I was trying to address.


np man, yeah it happened before, I agree, but recently they don't even need to do anything of the sort to sell products.
 
Not sure why you quoted me as wanting a 'safe space'. This sub does get well more than its fair share of flaming, though. Criticism is one thing, but constant, nigh-obsessive thread crapping is quite another.

This is a matter of perspective, though. I don't see what you see, but I also don't care who makes the product I use; I care that it works, and I buy and use what works.

Which means that if AMD is underperforming, that's what I'm going to state, and I get flamed for that. Not that I care.


[The safe-space concept applies thusly: in this sub, if one has something to say about AMD that isn't favorable, they're likely to get flamed, which is silly.]
 
This is a matter of perspective, though. I don't see what you see, but I also don't care who makes the product I use; I care that it works, and I buy and use what works.

Which means that if AMD is underperforming, that's what I'm going to state, and I get flamed for that. Not that I care.


[The safe-space concept applies thusly: in this sub, if one has something to say about AMD that isn't favorable, they're likely to get flamed, which is silly.]
Well, my underperforming Vega 64 is kicking my 1080 Ti's ass in mining right now for some unknown reason. The Vega 64 for the last 30 min is at $5.40/day and the 1080 Ti is at about $5/day. Even if the Vega is consuming more power, it is still more $/day profit (y). Well, in the last 30 min at least :(
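
A quick way to sanity-check that is to net out the electricity. Here's a minimal sketch of the math; the wattages and the $0.10/kWh power price are assumptions for illustration, not measurements from my rigs:

```python
# Rough mining-profit math: gross $/day minus 24h of electricity.
# Wattages and the power price below are assumed, for illustration only.

def net_per_day(gross_usd: float, watts: float, usd_per_kwh: float) -> float:
    """Gross daily mining revenue minus the cost of 24h of electricity."""
    power_cost = (watts / 1000.0) * 24.0 * usd_per_kwh
    return gross_usd - power_cost

# Assumed figures: Vega 64 at ~250 W, 1080 Ti at ~220 W, $0.10/kWh.
vega = net_per_day(5.40, 250, 0.10)   # ~$4.80/day
ti   = net_per_day(5.00, 220, 0.10)   # ~$4.47/day
print(f"Vega 64 net: ${vega:.2f}/day, 1080 Ti net: ${ti:.2f}/day")
```

So even if the Vega draws, say, 30 W more, it stays ahead at those revenue rates.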

Now, truth be told, this is a rare exception; usually it is the other way around, and that is mostly because I mine on those two machines in between other work rather than optimizing them just for mining. This could catch on as time goes on if Ethereum-style application ability is paired with the massive collective computing power that current blockchain networks have. As in, one could up one's graphics card budget, since the card could earn money on the side processing information besides just gaming and professional applications on the local computer.

As for performance/$, the line between RTG and Nvidia at the 1070 Ti/Vega 56 tier is blurred - above that price point Nvidia wins out, if one considers the niche market for the consumer. I just don't see it as a huge gap at all, is all.
 
np man, yeah it happened before, I agree, but recently they don't even need to do anything of the sort to sell products.
Very true; Maxwell and Pascal have been incredibly competitive. The 1070 Ti was just the slightest nudge they had to make lately, and even that probably has similar margins to a 1080, since it uses cheaper memory.
 
Didn't read all four pages of this, and I have no real interest in speculation just yet, but if anyone doubts that manufacturers will release hardware with broken features, you must have forgotten the T&L engine in the S3 Savage2000 chip.
 
In 2006, Roy Taylor was still working for Nvidia, so no surprises there with the "focus group", shilling, and trolling. When he moved to AMD, he brought that marketing strategy with him, and I saw the same pattern of incentivizing online influencers and advocacy being done by AMD's marketing, RTG, etc. Perhaps even more so because they're the underdog.
 
You are lucky if you see that much... In Unreal you don't get more than 5% savings, with a penalty in performance!

I'm not familiar with UE4 nowadays. I needed a much faster engine, so I don't use Unity/UE-like solutions. They are just too slow.

PS: you can't compare them with Maxwell and Pascal; that is the only way they render, so we don't know how much bandwidth and power are saved by the TBR unless we are told by nV.

Nvidia will happily tell you how much you win. ;) I optimized my approach with this information in mind. Sure, it is much worse for a dev to always have to ask the IHV, but we have to live with that.


Doesn't work that way; primitive discard is already part of Vega, and it was part of Polaris.

Are you referring to the primitive discard accelerator? Well, that's just for degenerate triangles.

Primitive shaders don't help with "primitive discard".

I didn't say that. I said the driver fast path helps with that. The primitive shader is much more: it can replace the domain and geometry shaders, and combine the vertex and hull shader stages. AMD said they don't provide direct access to this feature, but that might change in the future. For now they have implemented a prototype fast path that allows the hardware to discard unneeded attributes before vertex processing.

I don't know how they will use primitive shaders in the future. They can operate on a variety of different primitives, so they have a lot of potential, especially for a GPU-driven pipeline.
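
To illustrate the principle, here is a conceptual sketch of position-only primitive culling. This is not AMD's actual fast path, just the general idea that a triangle can be rejected from its projected positions alone, before the rest of its attribute set is ever fetched:

```python
# Conceptual sketch: decide whether a triangle can be thrown away using
# only its projected positions, *before* fetching the rest of its
# (possibly large) attribute set. Not AMD's driver code, just the idea.

def cross_z(ax, ay, bx, by):
    return ax * by - ay * bx

def can_discard(p0, p1, p2):
    """p0..p2: (x, y) positions in clip space, roughly [-1, 1]."""
    # Zero-area (degenerate) triangle: the two edge vectors are parallel.
    area2 = cross_z(p1[0] - p0[0], p1[1] - p0[1],
                    p2[0] - p0[0], p2[1] - p0[1])
    if area2 == 0.0:
        return True
    # Back-facing, assuming counter-clockwise front faces.
    if area2 < 0.0:
        return True
    # Entirely off-screen on one side of the view volume.
    for axis in (0, 1):
        if all(p[axis] < -1.0 for p in (p0, p1, p2)):
            return True
        if all(p[axis] > 1.0 for p in (p0, p1, p2)):
            return True
    return False

tris = [((0, 0), (1, 0), (0, 1)),   # visible
        ((0, 0), (1, 0), (2, 0)),   # degenerate: collinear vertices
        ((0, 0), (0, 1), (1, 0))]   # back-facing (clockwise winding)
print([can_discard(*t) for t in tris])  # [False, True, True]
```

Only the triangles that survive would pay for the full attribute fetch and downstream vertex work.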
 
I'm not familiar with UE4 nowadays. I needed a much faster engine, so I don't use Unity/UE-like solutions. They are just too slow.

You aren't going to get more "speed". I have tested out numerous engines without pixel shaders active, and throughput with UE4 is close to the maximum throughput of graphics cards lol. So again, if you don't know this, I suggest you test it out. You can always make your own shaders if you don't like the performance.



Nvidia will happily tell you how much you win. ;) I optimized my approach with this information in mind. Sure, it is much worse for a dev to always have to ask the IHV, but we have to live with that.

What are you talking about? Seems like you don't know how dev rel works.



Are you referring to the primitive discard accelerator? Well, that's just for degenerate triangles.

No, I'm not. Watch the video I linked; I think AMD knows more about this stuff than you do.

https://www.opengl.org/discussion_b...ER_DISCARD?s=3f6b48cfb5f2c9a53e10be07a9f1d4e6

You can see this stuff works for nV too. This is not early discard...


Early discard works well before the fragment (pixel) stage, not just immediately before it. But depending on the type of renderer, it might or might not work well. This is because it depends on the early depth and stencil tests, and we know those don't work too well with deferred shading.
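
A toy model makes the early-discard point concrete. This is purely an illustrative sketch with a made-up tiny framebuffer and full-screen layers, not how any real GPU schedules fragments:

```python
# Toy model of early-Z: count fragment-shader invocations when the depth
# test runs BEFORE shading versus AFTER.

import math

W, H = 8, 8  # tiny framebuffer

def shaded_fragments(layer_depths, early_z):
    depth_buf = [[math.inf] * W for _ in range(H)]
    shaded = 0
    for d in layer_depths:            # each layer covers the whole screen
        for y in range(H):
            for x in range(W):
                if early_z and d >= depth_buf[y][x]:
                    continue          # rejected before the fragment shader
                shaded += 1           # fragment shader invocation
                if d < depth_buf[y][x]:
                    depth_buf[y][x] = d   # depth test + write
    return shaded

layers = [1.0, 2.0, 3.0]              # drawn front-to-back
print(shaded_fragments(layers, early_z=True))   # 64: only the nearest layer shades
print(shaded_fragments(layers, early_z=False))  # 192: every layer shades
```

With early-Z on, the two occluded layers cost nothing; with it off, every layer runs the fragment shader.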


I didn't say that. I said the driver fast path helps with that. The primitive shader is much more: it can replace the domain and geometry shaders, and combine the vertex and hull shader stages. AMD said they don't provide direct access to this feature, but that might change in the future. For now they have implemented a prototype fast path that allows the hardware to discard unneeded attributes before vertex processing.

I don't know how they will use primitive shaders in the future. They can operate on a variety of different primitives, so they have a lot of potential, especially for a GPU-driven pipeline.


Exactly, you don't know how they work, yet you presume to know where the performance increase is coming from and how much potential performance uplift there will be; I suggest you don't do that! At the moment you are pulling numbers out of thin air. If you want realistic numbers, just look at the bottlenecks and know the max the hardware is capable of without them. You WILL NOT get 2x to 3x performance lol. Keep dreaming. At most the performance benefit without RPM is around 20% in triangle throughput lol; with RPM that increases to double that (this will reach Pascal's level of triangle throughput AT THE MOST, and AMD needs everything FUNCTIONAL and operating at maximum), and that is if the shader array is doing nothing else other than focusing on triangle operations. There is only so much horsepower available, and it needs to be split up for the different operations, not just vertex.

So instead of pulling numbers out of thin air, GO BY WHAT YA KNOW: ya know the specs of the card, and ya know when the card gets bottlenecked, so it's easy for ya to figure out the performance uplift (theoretical max).
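
To make that concrete, here's the kind of back-of-the-envelope I mean. The engine count, primitives per clock, and clock speed below are assumed round numbers for a hypothetical card, not any real spec sheet:

```python
# Peak triangle throughput from published specs:
# peak tris/sec = geometry engines x primitives per engine per clock x clock.
# The inputs below are assumed round figures, not measured values.

def peak_tris_per_sec(geometry_engines, prims_per_clock, clock_ghz):
    return geometry_engines * prims_per_clock * clock_ghz * 1e9

# e.g. a hypothetical 4-engine GPU at 1 prim/clock and 1.5 GHz:
base = peak_tris_per_sec(4, 1, 1.5)
print(f"{base / 1e9:.1f} Gtris/s peak")                     # 6.0
print(f"{base * 1.20 / 1e9:.1f} Gtris/s with a 20% uplift")  # 7.2
```

Whatever uplift a new front-end feature claims, it can only move you between those spec-derived ceilings.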
 
You aren't going to get more "speed". I have tested out numerous engines without pixel shaders active, and throughput with UE4 is close to the maximum throughput of graphics cards lol. So again, if you don't know this, I suggest you test it out. You can always make your own shaders if you don't like the performance.

You should test UE4 with Radeon GPU Profiler, and you will see that the engine does nearly 500 cache flushes before the first draw. With this bug you lose 10-15% performance in general, but until now there was no other profiler that allowed them to see this problematic behavior.
Rolando from Epic said they will use Radeon GPU Profiler in the future to optimize the renderer, so their future is bright.



What are you talking about? Seems like you don't know how dev rel works.


Time for personal attacks? This is so unprofessional. :)


No, I'm not. Watch the video I linked; I think AMD knows more about this stuff than you do.

https://www.opengl.org/discussion_b...ER_DISCARD?s=3f6b48cfb5f2c9a53e10be07a9f1d4e6

You can see this stuff works for nV too. This is not early discard...


Early discard works well before the fragment (pixel) stage, not just immediately before it. But depending on the type of renderer, it might or might not work well. This is because it depends on the early depth and stencil tests, and we know those don't work too well with deferred shading.





Exactly, you don't know how they work, yet you presume to know where the performance increase is coming from and how much potential performance uplift there will be; I suggest you don't do that! At the moment you are pulling numbers out of thin air. If you want realistic numbers, just look at the bottlenecks and know the max the hardware is capable of without them. You WILL NOT get 2x to 3x performance lol. Keep dreaming. At most the performance benefit without RPM is around 20% in triangle throughput lol; with RPM that increases to double that (this will reach Pascal's level of triangle throughput AT THE MOST, and AMD needs everything FUNCTIONAL and operating at maximum), and that is if the shader array is doing nothing else other than focusing on triangle operations. There is only so much horsepower available, and it needs to be split up for the different operations, not just vertex.

So instead of pulling numbers out of thin air, GO BY WHAT YA KNOW: ya know the specs of the card, and ya know when the card gets bottlenecked, so it's easy for ya to figure out the performance uplift (theoretical max).

Then I don't understand what you want to say. I know how discard works. And Vega has a new fast path in the prototype drivers that allows the hardware to discard a lot of unneeded attributes before vertex processing. This may lead to a 2x-3x faster primitive discard rate. Obviously you won't get 2x-3x more fps.
 
You should test UE4 with Radeon GPU Profiler, and you will see that the engine does nearly 500 cache flushes before the first draw. With this bug you lose 10-15% performance in general, but until now there was no other profiler that allowed them to see this problematic behavior.
Rolando from Epic said they will use Radeon GPU Profiler in the future to optimize the renderer, so their future is bright.

So you are attributing 10-15% to that? You should try that with a fully compiled UE4 game; you don't see that behavior.

Thank you very much for saying you don't know shit about UE4. Yeah, I have been programming engines for much longer than you, and I know what the possibilities are when it comes to performance. If you say your engine is faster at polygon throughput than UE4, that is just BS, cause UE4 is very close to theoretical performance, yeah, even with that bug you mention, so what. You writing your own engine because it's "faster" is BS; you are doing it for your own reasons, whatever they may be, which is OK.

Time for personal attacks? This is so unprofessional. :)

That is personal? Seems like you don't know how dev rel works lol. Simple: unless you've signed a contract with dev rel and the game dev program, they don't say shit. Nor is there anything about how things should be programmed lol. They just give you help if you want it. And if they want to do work on the program (code), it is up to the dev to allow that to happen lol.

Is that unprofessional to you? You just assumed BS, so I called it out again.


Then I don't understand what you want to say. I know how discard works. And Vega has a new fast path in the prototype drivers that allows the hardware to discard a lot of unneeded attributes before vertex processing. This may lead to a 2x-3x faster primitive discard rate. Obviously you won't get 2x-3x more fps.


It won't get anywhere near that, man lol; that's the problem. Your assumption that it will even reach around 2x is oblivious to any mathematical understanding of where the bottleneck hits vs the total potential that can be delivered on AMD hardware.

Turn off all high-level lighting and per-pixel lighting in your engine and test it out; just use per-vertex lighting. Turn off culling in all forms. And test. Fiji will hit a hard limit around 20,000,000 polys. Vega, surprisingly enough, right around the same (if using RPM it will go up to 40,000,000); it's a bit higher, but you need to factor in the clocks... Polaris too. You just have to look at how many GUs there are on a per-chip basis and you will know how many polys the chip can process. This has nothing to do with culling or discard; they only enhance what the possibilities can be for visible polys. So you aren't going to get 2x or 3x from discard enhancements via primitive shaders; it just won't happen, cause Vega would have to discard 75% to 90% more triangles than it's doing now. Does that even make mathematical sense to you?
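
Here's the arithmetic behind that point as a sketch; the rates are made-up illustrative numbers, but the shape of the math is what matters:

```python
# Amdahl-style sketch: speeding up primitive DISCARD only helps the
# culled fraction of triangles; visible ones still go through the
# fixed-rate setup path. All rates below are made-up illustrative numbers.

def front_end_time(tris, culled_frac, cull_rate, setup_rate):
    """Seconds to push `tris` triangles through the front end."""
    culled = tris * culled_frac
    visible = tris - culled
    return culled / cull_rate + visible / setup_rate

tris = 20_000_000
for speedup in (1, 2, 3):
    t = front_end_time(tris, 0.5, cull_rate=4e9 * speedup, setup_rate=4e9)
    print(f"{speedup}x discard rate -> {tris / t / 1e9:.2f} Gtris/s effective")
```

Even a 3x discard rate only nets about 1.5x effective front-end throughput at a 50% cull fraction, because the visible triangles still gate you at the setup rate.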

We know exactly where it hits; Maxwell midrange hits around there too. So you tell me, why is Maxwell having such an easy time with this number of polys in the higher-end chips of that line? It's not about discard, it's about the raw ability to process the geometry in the first place.

AMD can do whatever they want with discard to try to make their pipeline more optimal, but without fixing the problem where the real problem is, they aren't going to get much out of it. This is why Vega is Vega: AMD didn't have time to fix the geometry throughput, because that would have really changed GCN's architecture. Major modifications, everything from cache to rendering. So they did some small tweaks which allow their shader array to be used for GU work, and called them primitive shaders. The problem with this, which you mentioned, is that they aren't backwards compatible, so the traditional pipeline must stay. The second problem, which you failed to mention, and I did (and you later said I was right that it can't get to the 2x or 3x levels, but sidestepped why it wouldn't get there): using those resources as geometry units cuts down on their possible use elsewhere.

It's simple: the marketing numbers were BS, and you spouted them out as if they could be reached, and used your "experienced programmer" routine to make yourself sound like someone people should listen to.

Once GCN has allocated certain shader units as GUs, unless that block is flushed (not just the specific ALUs) they can't be used for anything else; unlike doing VS and PS, doing GS in the ALUs has its drawbacks currently. This is why you aren't going to get that automagical 2x or 3x performance that AMD marketing mentioned, EVEN with RPM in the mix for vertex processing, in a real-world application.

There are more issues than what I have mentioned so far. Should I go on?

Let's go on.

Today's games aren't even hitting the vertex limits either. So even if their new driver fixes anything, which I highly doubt it will, expecting anything great out of Vega is misplaced; these games will still hit bottlenecks elsewhere in Vega's pipeline. If we start looking at synthetic benchmark suites, we can see that Vega's shader array, when doing certain pixel shader operations, doesn't work as effectively as Pascal's or even Maxwell's shader array, from a FLOP-vs-FLOP angle. Why is that? Well, it means there are other problems which need to be addressed in GCN. But when we start looking at Kepler vs GCN and the iterations of GCN, things start to look mighty interesting, as the problems with GCN currently don't seem that bad when looking back at Kepler.

What we are seeing is that GCN is an old architecture and AMD wasn't able to keep up with nV's pace of innovation. Money, time, engineering expertise: all were lacking on AMD's side, and it shows.

Edit:

I have highlighted the part that is pertinent to your statement about the performance increase from primitive discard via primitive shaders.
 
The consumer Pascal successor (Volta/Ampere?) is rolling out soon, probably within 6 months. Vega v2 had better be a heck of an improvement, or else they risk Vega 64 vs GTX 2060. For competition's sake, I hope the revision really does wonders.
 
Are Vega 56 and 64 EOL? Seems like none are in stock and prices are sky-high. Did RTG pull the plug on the first editions and go with the updated version for next year? Basically RTG is utterly absent above the RX 580. I am wondering how Lisa Su is handling RTG at this time.
 
Are Vega 56 and 64 EOL? Seems like none are in stock and prices are sky-high. Did RTG pull the plug on the first editions and go with the updated version for next year? Basically RTG is utterly absent above the RX 580. I am wondering how Lisa Su is handling RTG at this time.

They did say that they had stopped reference GPU production and have allocated parts to their AIB partners for custom boards.
 
Based on preliminary benchmarks, Volta doesn't really look that much faster than full Pascal (with the exception of some benchmarks where it is SUBSTANTIALLY faster). Maybe driver related?

If AMD can hit 1080 Ti performance with the Vega refresh (doubtful, but here's to hoping) and undercut by coming in at like $400, I think they will do well.
 
Based on preliminary benchmarks, Volta doesn't really look that much faster than full Pascal (with the exception of some benchmarks where it is SUBSTANTIALLY faster). Maybe driver related?

If AMD can hit 1080 Ti performance with the Vega refresh (doubtful, but here's to hoping) and undercut by coming in at like $400, I think they will do well.


Pretty much; Volta has the horsepower but is limited by something in games, most likely bandwidth, or it could even be CPU limitations at times.

Nah, if they hit 1080 Ti performance, first it has to be priced lower like ya said, but it's also still going to draw more power... For Vega 2 to be successful, it needs to be competitive in all metrics, and priced accordingly. Right now everyone knows Volta or nV's next gen is around the corner; no one would want a Vega 2 unless it has advantages, and that doesn't look likely.
 
Pretty much; Volta has the horsepower but is limited by something in games, most likely bandwidth, or it could even be CPU limitations at times.

Nah, if they hit 1080 Ti performance, first it has to be priced lower like ya said, but it's also still going to draw more power... For Vega 2 to be successful, it needs to be competitive in all metrics, and priced accordingly. Right now everyone knows Volta or nV's next gen is around the corner; no one would want a Vega 2 unless it has advantages, and that doesn't look likely.

Ehhh, I really think they could cut power consumption drastically with the Vega refresh, just based off of what people undervolting Vega have achieved. I don't think they will hit the efficiency of Nvidia. The HD 4870 was very successful and did not catch Nvidia; it just came in at a lower price point.
 
Ehhh, I really think they could cut power consumption drastically with the Vega refresh, just based off of what people undervolting Vega have achieved. I don't think they will hit the efficiency of Nvidia. The HD 4870 was very successful and did not catch Nvidia; it just came in at a lower price point.


They literally would have to be giving them away, cause nV also has die-size advantages which AMD can't compete with... and they can't give them away; their margins are already hurting.
 
Ehhh, I really think they could cut power consumption drastically with the Vega refresh, just based off of what people undervolting Vega have achieved. I don't think they will hit the efficiency of Nvidia. The HD 4870 was very successful and did not catch Nvidia; it just came in at a lower price point.

Undervolting is possible only because AMD/RTG had to set the voltage high enough to stabilize all the chips they produced. Not to mention that such a high required voltage means inconsistency in silicon production.
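
That is also the back-of-the-napkin reason undervolting pays off so well: dynamic power scales roughly with f x V^2, so at a fixed clock a voltage drop helps quadratically. The voltages below are hypothetical examples, not actual Vega bin values:

```python
# Dynamic power scales roughly with frequency x voltage^2, so at a fixed
# clock the saving from an undervolt is quadratic in the voltage ratio.
# The voltages here are hypothetical examples.

def dynamic_power_ratio(v_new: float, v_old: float) -> float:
    return (v_new / v_old) ** 2

stock, undervolted = 1.20, 1.05        # volts, example values only
ratio = dynamic_power_ratio(undervolted, stock)
print(f"~{(1 - ratio) * 100:.0f}% dynamic power saved")  # ~23%
```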
 
Undervolting is possible only because AMD/RTG had to set the voltage high enough to stabilize all the chips they produced. Not to mention that such a high required voltage means inconsistency in silicon production.
I'm assuming the voltage was set so high due to yield/supply issues, which I would assume will be fixed by the Vega refresh. Here's to wishful thinking.
 
I'm assuming the voltage was set so high due to yield/supply issues, which I would assume will be fixed by the Vega refresh. Here's to wishful thinking.

It's a different, modified node, so I would expect it to be fixed too. But that doesn't always mean lower voltages, just a tighter bell curve; it could be the same voltages lol.
 
My 64 LC does not like to be undervolted, but then its clock speed is set pretty much to the max that the 64 can handle. And it still does not come close to the 1080 Ti except in a few titles. Currently the Ti is pulling over $7/day mining consistently, and the two 1070s over $10. I'm having a hard time getting the Vega to go over $4 without going to the blockchain drivers, which limit it pretty much to mining use. I have a very hard time believing AMD/RTG will compete against Volta; Nvidia is firing on all cylinders with an unlimited supply of nitrous oxide.
 
My 64 LC does not like to be undervolted, but then its clock speed is set pretty much to the max that the 64 can handle. And it still does not come close to the 1080 Ti except in a few titles. Currently the Ti is pulling over $7/day mining consistently, and the two 1070s over $10. I'm having a hard time getting the Vega to go over $4 without going to the blockchain drivers, which limit it pretty much to mining use. I have a very hard time believing AMD will compete against Volta; Nvidia is firing on all cylinders with an unlimited supply of nitrous oxide.


Yeah, nV cards are extremely profitable right now.

Also, added to this, I was contemplating AMD pooling their resources into a next-gen product while maintaining midrange/performance chips and not competing in the high end. It doesn't look good. Pretty much they have to scrap what they have on paper for future designs or accelerate them by a good 5 years. In other words, they need to predict what nV and the market will want 5 years down the road, 3 generations ahead. That is not easy to do, since nV has the market and will dictate the way the graphics market and the software market go. I don't think AMD can do anything right now or in the mid term. Long term, unless nV slows down development, it doesn't look likely AMD can do much...

This goes for Intel too. Even though Intel has the resources, when you have a market leader that holds all the cards, quite literally, it's not easy to create something that will, first off, match it and, secondly, predict future needs, cause at that point it's not just what nV is doing, it's what everyone associated with nV is doing also.
 
They did say that they had stopped reference GPU production and have allocated parts to their AIB partners for custom boards.
The BioStar Vega models which are coming out are all reference models - unless they just look like reference models with a cheapened and cut-down board. Reference models are more suited for use by OEMs, except I am not sure too many OEMs are that interested in that. I would think AMD would not still make, or have someone make, reference models.

http://www.biostar-usa.com/app/en-us/vga/series.php?S_ID=105
 
Yeah, nV cards are extremely profitable right now.

Also, added to this, I was contemplating AMD pooling their resources into a next-gen product while maintaining midrange/performance chips and not competing in the high end. It doesn't look good. Pretty much they have to scrap what they have on paper for future designs or accelerate them by a good 5 years. In other words, they need to predict what nV and the market will want 5 years down the road, 3 generations ahead. That is not easy to do, since nV has the market and will dictate the way the graphics market and the software market go. I don't think AMD can do anything right now or in the mid term. Long term, unless nV slows down development, it doesn't look likely AMD can do much...

This goes for Intel too. Even though Intel has the resources, when you have a market leader that holds all the cards, quite literally, it's not easy to create something that will, first off, match it and, secondly, predict future needs, cause at that point it's not just what nV is doing, it's what everyone associated with nV is doing also.
Well, AMD predicted very wrong:
  • Saw a decreasing gaming-market trend and an increasing compute market and thought APUs would sell well
    • The gaming market instead exploded
    • Intel made APUs which did not look like AMD expected
    • AMD's CPU for the APU was terrible :LOL:
    • Made GCN mostly for compute reasons and tacked on graphics for the growing compute market; power consumption went way up, and they did not invest in compute-platform resources such as software, while Nvidia created CUDA and their compute performance started to increase
    • Nvidia made both a dedicated gaming chip and an HPC chip, where their midrange gaming-sized chips matched and then beat RTG's master-of-none design in gaming. Look at the 1080's GP104 size and then compare it to Vega :ROFLMAO:; the 1080, with virtually half the power, keeps up and beats it in many cases
    • Nvidia then made a big gaming-oriented chip, GP102 (Titan X and 1080 Ti), which AMD has nothing to compete against for gaming and graphics on workstations
    • Now Nvidia has something even newer for the consumer, the killer chip GV100, available
    • RTG's decision to just go with HBM on the high end, regardless of all the red flags, delayed the launch of Vega by well over a year, with them having nothing to compete gaming-wise with the Nvidia 1070 and up (while the Fury line's performance was close to the 1070, the cost to make it and compete against the 1070 was not there)
Still, the end is not near for RTG. The Intel 24-CU Vega design should help if readily available. Consoles are still being made, which should help. The AMD APU is finally better than Intel's equivalent overall. Compute performance on Vega is respectable (though just using it may be difficult). Plus, mining has pretty much kept every Vega card made sold. I just wish I could get more out of my Vega without limiting it; I will just have to experiment more.
 
The BioStar Vega models which are coming out are all reference models - unless they just look like reference models with a cheapened and cut-down board. Reference models are more suited for use by OEMs, except I am not sure too many OEMs are that interested in that. I would think AMD would not still make, or have someone make, reference models.

http://www.biostar-usa.com/app/en-us/vga/series.php?S_ID=105

At this point, I think if Biostar wants to sell Vega whatever way they want, AMD will be just grateful to sell the chips. Perhaps Biostar is simply cutting costs. Or maybe it's their brand, cause a look at the other models they sell does not inspire much "high-end" pricing. Frankly, they all look reference or cheap, even the 1080 model. Also, Newegg does not have a single Biostar GPU for sale. (There are 1 or 2 selling on Amazon, but eh. It does not look good in terms of Biostar appealing to consumers in the USA, at least beyond the low-to-mid arenas.) Maybe their intended market is different.
 
At this point, I think if Biostar wants to sell Vega whatever way they want, AMD will be just grateful to sell the chips. Perhaps Biostar is simply cutting costs. Or maybe it's their brand, cause a look at the other models they sell does not inspire much "high-end" pricing. Frankly, they all look reference or cheap, even the 1080 model. Also, Newegg does not have a single Biostar GPU for sale. (There are 1 or 2 selling on Amazon, but eh. It does not look good in terms of Biostar appealing to consumers in the USA, at least beyond the low-to-mid arenas.) Maybe their intended market is different.
Well, BioStar is doing well with mining motherboards, and this seems to go along with that trend as well. I have not seen the BioStar Vegas either, and I'm not sure if they will mostly be sold in the East. Right now it is pointless to buy Vega at current prices for most folks. Even the list prices for the BioStar boards are high, but we won't know until they go on sale.
 
Well, BioStar is doing well with mining motherboards, and this seems to go along with that trend as well. I have not seen the BioStar Vegas either, and I'm not sure if they will mostly be sold in the East. Right now it is pointless to buy Vega at current prices for most folks. Even the list prices for the BioStar boards are high, but we won't know until they go on sale.

http://www.biostar-usa.com/app/en-us/index.php

Scroll over the Products tab on their site: you don't even see an inkling that Biostar makes GPUs. Maybe it's just a website mistake, but that's a huge oversight if Biostar intends on selling it in the US. (I can't speak for the rest of the world and their market preferences, or for Biostar's other web fronts.) I think all this means that looking at Biostar as a guide to how AIBs will be treating Vega is not the best idea. At least not alone. Now, considering Biostar's mining focus, and how Biostar is marketing (or not marketing) their GPUs in the U.S., I suppose those cards could be headed for those customers.

http://www.biostar.com.tw/app/en/index.php

Now, on their .tw site, graphics cards do show up. Still, they are the same cheap-looking models. And on their list of partners/storefronts selling in the U.S., only one sells Biostar GPUs, a storefront called Deep In The Mines, and they look like they sell purely mining-oriented stuff. Do you have pricing for the Biostar Vegas? We are seeing pricing for other AIB models of Vega show up, but they are in euros and such. In those cases the cards look non-reference, but as you said, pricing is killing all comparisons to Nvidia.
 
When was that? And I think they have never been too picky about the brand of junk they sell.
I built 3 rigs with Biostar boards roughly 10 years ago, when they made these...


http://hwbot.org/community/submission/1004789_namegt_cpu_z_phenom_ii_x4_955_be_7125.42_mhz

That is not my record, FYI.
 
Biostar, eek! Well, I can't say that; using their motherboards for my mining rigs, they are solid. Don't know about their gaming boards; always thought they had fewer features, pretty much barebones, so I never used 'em.

So for me, barebones and rigs, 73 of them, and all are strong; can't complain. Yeah, some of the USBs (1.0 ones) aren't working, but that is OK; don't need them.
 