PC Perspective Radeon Vega Frontier Edition Live Benchmarking - 4:30PM CDT

OK, so it's not a powerhouse on performance, but let's at least wait for the price before throwing stones at RTG.
I would rather pay $40k for a Mustang than $150k for a Ferrari.

That is a horrible comparison. Most of the world can't afford a $150k Ferrari in their entire lifetime because of these things called obligations, transportation, responsibility, etc. Most people in the market for $500 graphics cards can, at some point, spend $700 or $800 if they choose to keep saving for just a little bit longer.

The absolute cheapest this thing can come in at is $500, but at that price AIBs and AMD aren't making much money. For $500, is it worth getting maybe 10% more performance than a GTX 1080 for another 100+ watts of power use and heat dump into your room? Or, better yet, just spend $650-700 and get 25% more performance, more VRAM, and still less heat dump.
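To put rough numbers on that trade-off, here's a napkin perf-per-dollar and perf-per-watt sketch; the $500 GTX 1080 price and the board-power figures are my assumptions, the rest are the estimates above:

```python
# Rough perf-per-dollar and perf-per-watt using the figures from the post
# above (GTX 1080 = 1.00 baseline). The $500 GTX 1080 price and the board
# powers are my assumptions; the rest are the post's estimates.
cards = [
    # name, price $, relative perf, board power W
    ("GTX 1080",    500, 1.00, 180),
    ("Vega FE",     500, 1.10, 300),  # "maybe 10% more ... 100+ watts more"
    ("GTX 1080 Ti", 675, 1.25, 250),  # "$650-700 ... 25% more performance"
]

for name, price, perf, watts in cards:
    print(f"{name:12s} perf/$100: {perf / price * 100:.3f}"
          f"   perf/100W: {perf / watts * 100:.3f}")
```

Perf per dollar alone is close between them; the heat-dump argument is what tips it.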
 
Shit, at those prices why not build the whole thing out of cache chips, the way one does cache for a CPU. Hehe

Cache on a CPU is not nearly dense enough. Even with eDRAM, you'd still need more than the die size of a GPU to hold 16GB of RAM.

There's a reason why memory sticks have multiple chips in them. It's not easy to put all the information you want to access in just one chip :D

The highest-density DDR4 chips hold just 2GB each!

This is why they're stacking DRAM on top of processors in cell phones, and also why they're trying the same trick for a GPU (just much taller stacks, and much higher bandwidth). We need multiple chips to hold our data, but it's difficult to get them all to talk to each other at high speed.
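A quick back-of-the-envelope on chip counts makes the point; the DDR4 density is from the post above, the HBM2 stack figures are illustrative assumptions:

```python
# Back-of-the-envelope: how many discrete DRAM dies does 16GB take?
TARGET_GB = 16
DDR4_DIE_GB = 2  # highest-density DDR4 die, per the post above

print(f"{TARGET_GB // DDR4_DIE_GB} DDR4 dies for {TARGET_GB}GB")  # -> 8

# HBM2 sidesteps the board-level wiring problem by stacking dies on an
# interposer next to the GPU. Stack height and per-die capacity below are
# illustrative HBM2 figures (8-high stacks of 1GB dies), not from the post.
HBM2_DIE_GB, STACK_HEIGHT = 1, 8
stacks = TARGET_GB // (HBM2_DIE_GB * STACK_HEIGHT)
print(f"{stacks} HBM2 stacks of {STACK_HEIGHT} dies each")  # -> 2
```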
 
Let's pretend NCU literally performs the same as GCN (the slide deck says otherwise) and that Vega FE here is a die-shrunk Fury X clocked 60% higher. But it only performs 10% faster? Something is wrong here, very, very wrong, or AMD went backwards with their architecture. NCU is supposed to process more geometry per clock. Vega has 4 times the RAM and basic architectural improvements over GCN in Fury. Someone explain these numbers?
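For reference, the napkin math behind that complaint, assuming a literal die shrink with perfectly clock-bound scaling (an upper bound, not a prediction):

```python
# If Vega FE were literally a die-shrunk Fury X, clock-bound performance
# would scale roughly linearly with frequency. Clocks are the usual boost
# figures (Fury X 1050MHz; Vega FE's rated 1600MHz).
fury_x_mhz, vega_fe_mhz = 1050, 1600

expected = vega_fe_mhz / fury_x_mhz - 1
print(f"Expected clock-bound gain: ~{expected:.0%}")  # ~52%
print("Observed in these early numbers: ~10%")
```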
 
AMD stock is taking a hit

[Attachment 29101: AMD stock chart]

Edit: stock appears unrelated; NV is taking a similar hit as well

Addendum: all the tech stocks were taking hits yesterday, but tech futures rebounded a bit today. Still not a bad first half for the Dow and S&P 500 (pretty good performance compared to the first halves of the last couple of years), but the Nasdaq is still down by a bit (0.06%).

/sidetrack off

With the mining craze, I couldn't hold out any longer and sold my Zotac GTX 1070 Mini recently, using the proceeds to help offset my purchase of an EVGA GTX 1080 Ti Black Edition. I'm quite glad I did -- the performance differential in 4K is immense, and a slight overclock allowed me to hit that "ideal" 2000 MHz mark.

As for the washed out colors of NVIDIA, my choice of a VA panel 4K TV goes some minor way in mitigating that, I suppose (I do like the blacks).

Vega? Not looking too hot right now (maybe that will be a different story in a year's time with the "gaming card" and "mature drivers"?). But waiting another whole year for drivers to catch up is ridiculous, especially when even the best possible (or most mature) drivers are not likely to allow Vega to match a stock GTX 1080 Ti ...

AMD did pretty nicely with Ryzen (and I'm still debating between an i5-7600K and a Ryzen 5 1600X for my new rig). But Vega looks like a "pass" to me. Maybe Navi will get the ball rolling again?
 
I just desperately want something to drive a 2K FreeSync monitor properly... that's my standard. I'd buy a 1080/Ti, but dropping an extra $700 for a G-Sync monitor nauseates me.
 
This product should have been released 6-8 months ago, not now.

It's so late compared to other cards, and Volta is not that far off (by the time the Vega gaming card is out).
 
Let's pretend NCU literally performs the same as GCN (the slide deck says otherwise) and that Vega FE here is a die-shrunk Fury X clocked 60% higher. But it only performs 10% faster? Something is wrong here, very, very wrong, or AMD went backwards with their architecture. NCU is supposed to process more geometry per clock. Vega has 4 times the RAM and basic architectural improvements over GCN in Fury. Someone explain these numbers?

This is exactly why many are shocked and can't believe the poor performance of Vega FE. AMD claimed an increase in performance per watt, yet Vega @ 1600MHz is not even faster than an overclocked Fury @ 1125MHz. This is not possible unless the NCU design regressed significantly in IPC.

http://www.3dmark.com/compare/fs/12987144/fs/11047656/fs/6657103#
 
Hardly surprising. And no, I'm not an nVidia fanboy. I'm a graphics card fanboy. I'll buy whatever and whoever is fastest. Trust me, I do not enjoy spending $700 on a video card like I did with my current EVGA SC 1080 Ti.

AMD is just not going to catch up to nVidia anytime soon, if ever. It does get annoying that there are so many people hopeful that AMD can best nVidia, and we go through this each and every time. I say annoying because history is our best teacher, but that often gets ignored.

Let's hope AMD prices this card around $325. You can get used 1070s for $300-ish.

They definitely have to price this well under a 1080 or people will absolutely never ever buy it.
 
Hardly surprising. And no, I'm not an nVidia fanboy. I'm a graphics card fanboy. I'll buy whatever and whoever is fastest. Trust me, I do not enjoy spending $700 on a video card like I did with my current EVGA SC 1080 Ti.

AMD is just not going to catch up to nVidia anytime soon, if ever. It does get annoying that there are so many people hopeful that AMD can best nVidia, and we go through this each and every time. I say annoying because history is our best teacher, but that often gets ignored.

Let's hope AMD prices this card around $325. You can get used 1070s for $300-ish.

They definitely have to price this well under a 1080 or people will absolutely never ever buy it.

Yeah, price is everything. Don't need AMD to be BETTER than NVIDIA (or Intel for that matter)... just enough to keep either from settling in. NVIDIA is doing a much better job of staying ahead than Intel is, though, imo.
 
Hardly surprising. And no, I'm not an nVidia fanboy. I'm a graphics card fanboy. I'll buy whatever and whoever is fastest. Trust me, I do not enjoy spending $700 on a video card like I did with my current EVGA SC 1080 Ti.

AMD is just not going to catch up to nVidia anytime soon, if ever. It does get annoying that there are so many people hopeful that AMD can best nVidia, and we go through this each and every time. I say annoying because history is our best teacher, but that often gets ignored.

Let's hope AMD prices this card around $325. You can get used 1070s for $300-ish.

They definitely have to price this well under a 1080 or people will absolutely never ever buy it.

You would be lucky to get a 1070 for $400 atm, thanks to mining.
 
This is exactly why many are shocked and can't believe the poor performance of Vega FE. AMD claimed an increase in performance per watt, yet Vega @ 1600MHz is not even faster than an overclocked Fury @ 1125MHz. This is not possible unless the NCU design regressed significantly in IPC.

http://www.3dmark.com/compare/fs/12987144/fs/11047656/fs/6657103#

PCPer showed it doesn't really clock at 1600MHz; they reported it was typically 1400-1450MHz, and they said performance over the Fury X was 25-45%. So it is fairly in line with the clock speed increase.
https://www.pcper.com/reviews/Graph...ion-16GB-Air-Cooled-Review/Conclusionsfor-now
"We are also seeing the new Vega FE product produce gaming results 25-45% faster than the aging R9 Fury X card"

IMO, most recent-generation improvement claims from both NVidia and AMD are marketing.

If you look at Maxwell -> Pascal, there are no basic IPC improvements. Units x clock speed tells the same story for both, and I expect it will be much the same with Vega.
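The arithmetic behind "in line with clock speed", using the clocks from the linked review:

```python
# Sanity check: does PCPer's measured 25-45% gain match the clock delta?
fury_x_mhz = 1050
vega_sustained_mhz = (1400, 1450)  # range PCPer reported under load

for mhz in vega_sustained_mhz:
    print(f"{mhz}MHz vs Fury X 1050MHz: +{mhz / fury_x_mhz - 1:.0%} clock")
# -> +33% and +38%, sitting inside the 25-45% performance spread,
#    i.e. roughly linear scaling with clock and little net IPC change.
```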
 
Hardly surprising. And no, I'm not an nVidia fanboy. I'm a graphics card fanboy. I'll buy whatever and whoever is fastest. Trust me, I do not enjoy spending $700 on a video card like I did with my current EVGA SC 1080 Ti.

AMD is just not going to catch up to nVidia anytime soon, if ever. It does get annoying that there are so many people hopeful that AMD can best nVidia, and we go through this each and every time. I say annoying because history is our best teacher, but that often gets ignored.

Let's hope AMD prices this card around $325. You can get used 1070s for $300-ish.

They definitely have to price this well under a 1080 or people will absolutely never ever buy it.

The going rate for a 1070 is $475-600 right now due to the crypto craze and nvidia cards being excellent at mining certain coins these days. It won't last forever, but AMD is even losing its grip on what was once its area of total dominance in the mining world.

Was going to sell my 1070 for a no-cost upgrade to a 1080; instead I bought two 1080 Tis and put it all to work mining and made an easy return on investment :)
 
And now everyone sees why Vega has taken so long to come to market. It's broken; they tried to fix it, haven't really been able to, so they were waiting for the cost of HBM2 to drop so they could price the cards competitively in the market.

Don't get me wrong, it's still a pretty good compute card for professionals.

Perhaps they still have a few tricks up their sleeve, with a GPU Infinity Fabric coming in the future allowing for multiple small dies on a single card, or at least that's where I hope they are going.
 
Let's pretend NCU literally performs the same as GCN (the slide deck says otherwise) and that Vega FE here is a die-shrunk Fury X clocked 60% higher. But it only performs 10% faster? Something is wrong here, very, very wrong, or AMD went backwards with their architecture. NCU is supposed to process more geometry per clock. Vega has 4 times the RAM and basic architectural improvements over GCN in Fury. Someone explain these numbers?

This exactly. We're making a huge fuss about something that is obviously not supposed to be this slow in the gaming version. I'm not expecting huge jumps, but 10-20%+ is not unreasonable with optimisation, let alone better thermals on water-cooled versions. In fact, 10-20% from drivers alone is not too unusual for AMD historically.

Let's stop and think for a second: do you really think AMD spent two years making a flagship that's slower than a die-shrunk Fiji, less efficient, not using TBR or any of the uarch tricks they've implemented since? It appears on early investigation that the drivers are from January and behave almost identically to Fiji's, with almost identical programming.
Basically nothing is taking advantage of the new Vega architecture, thus there is a slight regression in efficiency per clock versus Fiji.

It's pretty clear they wanted to get the pro drivers out as stable as possible (hence six months old) to meet shareholder deadlines.
With gaming Vega, hopefully they'll begin implementing architectural optimisations.

I hate being right on this stuff, but it's almost guaranteed that AMD drivers are shit for the first two weeks or a month or so after launch. Both performance and stability. Obviously in a pro card, stability is key. Hence the January drivers and Fiji-tier clock/ops performance.


Or they just made a die-shrunk Fiji card and sat on their ass to make it less efficient clock-for-clock than Fiji using six-month-old drivers.
Makes perfect sense.

If they wanted to simply shrink Fiji, we would've seen it a year ago...
 
This exactly. We're making a huge fuss about something that is obviously not supposed to be this slow in the gaming version. I'm not expecting huge jumps, but 10-20%+ is not unreasonable with optimisation, let alone better thermals on water-cooled versions. In fact, 10-20% from drivers alone is not too unusual for AMD historically.

Let's stop and think for a second: do you really think AMD spent two years making a flagship that's slower than a die-shrunk Fiji, less efficient, not using TBR or any of the uarch tricks they've implemented since? It appears on early investigation that the drivers are from January and behave almost identically to Fiji's, with almost identical programming.
Basically nothing is taking advantage of the new Vega architecture, thus there is a slight regression in efficiency per clock versus Fiji.

It's pretty clear they wanted to get the pro drivers out as stable as possible (hence six months old) to meet shareholder deadlines.
With gaming Vega, hopefully they'll begin implementing architectural optimisations.

I hate being right on this stuff, but it's almost guaranteed that AMD drivers are shit for the first two weeks or a month or so after launch. Both performance and stability. Obviously in a pro card, stability is key. Hence the January drivers and Fiji-tier clock/ops performance.


Or they just made a die-shrunk Fiji card and sat on their ass to make it less efficient clock-for-clock than Fiji using six-month-old drivers.
Makes perfect sense.

If they wanted to simply shrink Fiji, we would've seen it a year ago...

What uarch tricks exactly are you referring to that vastly improve performance over Fiji? You know, most (if not all) of the performance gains on Polaris are due to the higher clocks of those cards in comparison to older ones with the same shader configuration, Tonga for example... the fixes they made to their GCN architecture are mostly to mitigate its weaknesses, so they take less of a performance hit with heavy geometry/polygon-output effects such as tessellation. It's not that they gain big performance, it's just that they lose less ;)

Pick a Tonga 380X and compare it straight to an RX 470 (same TMU configuration) at the same clocks and guess what... they will perform nearly the same. Same with the older brother, the 280X: Tonga was two revisions newer than the 280X, however they performed basically the same, with the exception that Tonga didn't take huge FPS hits in heavily tessellated games. Clock a 280X to 1400MHz and it will be a monster of a performer.

Let's look at some benchmarks:

[Benchmark charts: Witcher, The Division, Tomb Raider, GTA V, Hitman, and Metro FPS comparisons]
So what have we got here when we compare the sub-1000MHz 380X and 380 against the 1240MHz+ RX 470? How would the 380X perform at 1240MHz+? Aww yeah, very close to the Polaris RX 470.

The same situation may actually be happening with the Vega FE card at ~1400MHz versus the old Fury X at 1050MHz: gains are mostly based on clocks rather than architectural improvements. However, there are still issues that may be related to other kinds of processing bottlenecks, as may be the case with the ROPs, which number just 64...
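One way to make that concrete is to normalize FPS by core clock; a minimal sketch with placeholder FPS values (only the clocks are from the discussion above):

```python
# Clock-normalized comparison: FPS per GHz. If two GCN parts with the same
# shader/TMU count land at similar FPS/GHz, the gains were clocks, not uarch.
# FPS values here are placeholders -- substitute the real chart numbers.
cards = [
    # name, core clock MHz, FPS in some title (hypothetical)
    ("R9 380X (Tonga)",  970, 38.0),
    ("RX 470 (Polaris)", 1240, 48.5),
]

for name, mhz, fps in cards:
    print(f"{name:17s} {fps / (mhz / 1000):.1f} FPS per GHz")
# Near-equal FPS/GHz supports the "it's mostly clocks" argument.
```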
 
If Vega RX does end up performing near these numbers, the post-mortem of this architecture will be very interesting. I'm sure this is not what AMD was expecting during development.
 
LOL. *Slips on flame-retardant suit and hides behind a bunker* :D Hey, the results may not be the same for others, but my results were repeatable on my setup, whether others like it or not. (R9 290, R9 380, and now R9 Furies in CrossFire. The EVGA 980 Ti looked washed out on my monitor and the other cards did not, at all; that's just the way it is.) I have no issue if someone wants to own Nvidia and it is worth it for them, but it is not for me, and I learned that the hard and expensive way.

Edit: On my setup, it was not just games but the desktop as well. I tried ignoring it, but after a while it was just not something I could ignore anymore, and I switched back to AMD, which I am happy about. (I just timed it a week or two earlier than I should have: the Fury I bought for $360 later went on sale for $80 or so less, at around $280. My timing stinks. :D)

Were you using HDMI by any chance? Because by default the drivers set it to "Limited" on HDMI. Spend 5 seconds and select "Full" and you're good to go. If that is not the issue, more than likely you were looking for a problem and made one up. Which is fine, because you can use what you want. But if Nvidia was not capable of providing proper colors, I'm sure [H] and the rest of the well-respected reviewers would have noted it by now.
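For the unfamiliar: "Limited" maps RGB into the 16-235 video range instead of the full 0-255, which is exactly what reads as washed out when the display expects full range. A minimal sketch of the mapping:

```python
# Limited ("TV") range squeezes 0-255 into 16-235, so pure black becomes
# 16 and pure white 235 -- blacks look grey and the whole image looks
# washed out if the display expects full range.
def full_to_limited(v: int) -> int:
    return round(16 + v * (235 - 16) / 255)

for v in (0, 128, 255):
    print(f"full {v:3d} -> limited {full_to_limited(v)}")
# full   0 -> limited  16
# full 128 -> limited 126
# full 255 -> limited 235
```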
 
The die is also bigger than expected: an entire 564mm2, when around 530mm2 was expected.
 
A couple of thoughts.
I see NOTHING in this advertisement that points to Vega being a card for a competitive gaming experience. No sirs. Instead, I see an advertisement for a card aimed at professional users who would normally be shopping for Quadro cards.
It's important to compare apples to apples in these situations and not lose our heads...

I said it here, https://hardforum.com/threads/amd-v...ilable-for-pre-order-at-newegg.1937817/page-3, and I'll say it one more time:
It was AMD themselves who compared their Vega Frontier Edition with the Titan Xp at gaming! (http://www.pcworld.com/article/3202...vega-frontier-edition-vs-nvidia-titan-xp.html)
If they don't want reviewers (or people) to use it in gaming benchmarks, then they shouldn't have done so themselves! Since they were the first to do it, anyone else can do it as well!
 
I said it here, https://hardforum.com/threads/amd-v...ilable-for-pre-order-at-newegg.1937817/page-3, and I'll say it one more time:
It was AMD themselves who compared their Vega Frontier Edition with the Titan Xp at gaming! (http://www.pcworld.com/article/3202...vega-frontier-edition-vs-nvidia-titan-xp.html)
If they don't want reviewers (or people) to use it in gaming benchmarks, then they shouldn't have done so themselves! Since they were the first to do it, anyone else can do it as well!

The reason they do this is as follows:

Vega FE is a sporty 4x4:
With good off-road tyres it can go offroad and do pretty well, as well as decent 4x4s made just for this (workstation).
With good road tyres it can go on road, quick enough to keep up with some decent cars (game drivers).

It's still not the best at either. Quadro performance is not identical to the gaming card variants either, yet you hear nothing about that in the Vega permanent-outrage-generation threads.

They want to compare gaming to show what it can do when you have to test a game engine or whatever the fuck pro users will do with it.
They also want to show how it does in a Quadro/WS scenario.

Getting the functionality of both is what exceeds the value of Vega... this is what marketing is trying to show. You have to shell out for a Titan Xp + a Quadro for the same performance, because they killed the OG 'true' Titan, which is the closest thing to this Vega FE.



And let's stop and be rational here.
Do you really think this is actually indicative of the final performance numbers?
A Fury X is already at or just under 1070 speed at higher resolutions. This is 300MHz+ faster and performing pretty closely.

We already know it does not have the new form of TBR operating (as mentioned by Raja), which likely means the drivers are not optimised for the other refinements either; it's also using a driver from January with slightly regressed perf/clock compared to Fiji. They have obviously focused on stability, as it's a pro card (in before someone posts the Blender crash article to distract from the message at hand). Do you seriously think that this is what they could do after all this time? Almost the same performance as a Fury X? Just a die shrink that took two years?




What uarch tricks exactly are you referring to that vastly improve performance over Fiji? You know, most (if not all) of the performance gains on Polaris are due to the higher clocks of those cards in comparison to older ones with the same shader configuration, Tonga for example... the fixes they made to their GCN architecture are mostly to mitigate its weaknesses, so they take less of a performance hit with heavy geometry/polygon-output effects such as tessellation. It's not that they gain big performance, it's just that they lose less ;)

This has already been addressed above, but they also used memory timings to boost performance from the 290X to the 390X. Not just uarch. You are playing semantics too: they did "reduce their weaknesses", aka increase efficiency. I guess nvidia also "reduced their weaknesses" by simply clock-bumping Paxwell above all? Just ignore the other changes they made...
What AMD has advertised alone is not matching up to the figures we are getting. Do you think they'd open themselves up to a massive lawsuit so stupidly all of a sudden?
No, amd suxxx reeeeeeeeee /thread

NCU is GCN. The PR talk is that it's always new and completely different, yet it turns out not to be. And Navi will be GCN as well. Call it what you want.

Have you ever considered that they are using re-purposed Fiji drivers, which take zero advantage of any improvements, evidenced by the simple fact that it has actually regressed in clock/perf ratio...?
But no, the sky is falling because you're all getting worked up over traditionally shitty launch drivers, on a product most of you would never own anyway, instead of making the same fuss about the shitty X299 FET cooling situation, which is something people like you are more likely to shell out for, because Intel and my 15% higher FPS at 720p.
 
Were you using HDMI by any chance? Because by default the drivers set it to "Limited" on HDMI. Spend 5 seconds and select "Full" and you're good to go. If that is not the issue, more than likely you were looking for a problem and made one up. Which is fine, because you can use what you want. But if Nvidia was not capable of providing proper colors, I'm sure [H] and the rest of the well-respected reviewers would have noted it by now.

Yes, it was an issue, and I was using DisplayPort. Please do not try to hide what actually exists as a problem; Nvidia has this problem and chose not to fix it, that is just the way it is. Better that folks know about it and choose what they want to do from there than hide the problem and have folks get frustrated when their card does not work correctly, or looks all washed out.
 
Do you think something happened to the original Vega design sometime last year, when they kept pushing back the release, and they fell back to doing a die shrink of the Fury X?
 
Do you think something happened to the original Vega design sometime last year, when they kept pushing back the release, and they fell back to doing a die shrink of the Fury X?
I'm thinking, after reading many posts, and as suggested by others, that Vega FE might be Vega 0.5: more Fury X as a consequence. It is making more sense; going with a more seasoned architecture with fewer changes would allow a faster, more reliable implementation.
 
I'm pretty sure everyone has said their gaming card is going to be ~1080 or a little faster, which seems to be what will be the case. Let's see that $399 sticker and everyone will buy two.
 
I'm pretty sure everyone has said their gaming card is going to be ~1080 or a little faster, which seems to be what will be the case. Let's see that $399 sticker and everyone will buy two.

With HBM2 memory, a ~550mm2 die, and a 300W power delivery / cooling system, there won't be any money to be made if it comes out at $399. After accounting for yields, shipping and packaging costs, AIB profit, e-tailer / retailer profits, and eventual RMA / warranty work, that leaves no room for margins. AMD's only real option would be to make every Vega die a professional or prosumer part.

Besides, all nvidia has to do is lower the price of the much cheaper GTX 1080 to $450 and the GTX 1070 to $300, and boom, Vega is DOA. A $300 GTX 1070, which consumes half the power and is 75-80% as fast as Vega, makes Vega look terrible, while a similarly performing part with 100 fewer watts of heat dump into a user's room and case, and OC headroom to spare, still looks more attractive than a $400 Vega at stock 1080 performance levels.
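To illustrate why $399 looks impossible, a rough bill-of-materials sketch; every figure is an assumption (the ~$160 HBM2-plus-interposer number is a rumor that comes up later in this thread), not anything AMD has disclosed:

```python
# Illustrative BOM for a hypothetical $399 Vega -- every input is a guess.
msrp         = 399
retail_cut   = 0.10 * msrp  # e-tailer/retailer margin (assumption)
aib_cut      = 0.08 * msrp  # board partner margin (assumption)
hbm2_package = 160          # 16GB HBM2 + interposer (rumored figure)
gpu_die      = 90           # ~550mm2 die incl. yield loss (guess)
board_cooler = 55           # PCB, VRM, 300W cooler (guess)
rma_reserve  = 0.03 * msrp  # warranty/RMA reserve (assumption)

amd_left = (msrp - retail_cut - aib_cut - hbm2_package
            - gpu_die - board_cooler - rma_reserve)
print(f"Left for AMD per card: ${amd_left:.0f}")  # ~$10 -- basically nothing
```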
 
From the tear-down, the die looks to be ~527mm2.

How is it that big with only 4096 SPs?

Polaris is only 232mm2 with 2304 SPs.

2x Polaris would be:

4608 SPs and 464mm2.
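The density math behind that point, using the figures above:

```python
# Shader density: stream processors per mm2.
polaris_sps, polaris_mm2 = 2304, 232
vega_sps, vega_mm2 = 4096, 527  # tear-down estimate above

print(f"Polaris: {polaris_sps / polaris_mm2:.1f} SPs per mm2")  # ~9.9
print(f"Vega:    {vega_sps / vega_mm2:.1f} SPs per mm2")        # ~7.8
# Vega spends noticeably more area per SP -- presumably on clock-oriented
# layout, larger caches, and new fixed-function logic rather than raw SPs.
```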
 
...
And let's stop and be rational here.
Do you really think this is actually indicative of the final performance numbers?
A Fury X is already at or just under 1070 speed at higher resolutions. This is 300MHz+ faster and performing pretty closely.

I always try to be rational! ;)
As I said in my previous post, there is one fact as a given: AMD themselves have benchmarked the Vega FE in certain games of their choice. Did they state anywhere that these numbers are inaccurate? No, there is no such statement (from what I know, at least).
So, with this fact as a given, it's not my job to pretend that I'm AMD's mentalist. If they want to share more info, I'll be glad to listen. If not, then I'll conclude my thoughts with the given facts I have so far (and I've already said which are the given facts so far). That's my way of thinking rationally! :pompous:
 
With HBM2 memory, a ~550mm2 die, and a 300W power delivery / cooling system, there won't be any money to be made if it comes out at $399. After accounting for yields, shipping and packaging costs, AIB profit, e-tailer / retailer profits, and eventual RMA / warranty work, that leaves no room for margins. AMD's only real option would be to make every Vega die a professional or prosumer part.

Besides, all nvidia has to do is lower the price of the much cheaper GTX 1080 to $450 and the GTX 1070 to $300, and boom, Vega is DOA. A $300 GTX 1070, which consumes half the power and is 75-80% as fast as Vega, makes Vega look terrible, while a similarly performing part with 100 fewer watts of heat dump into a user's room and case, and OC headroom to spare, still looks more attractive than a $400 Vega at stock 1080 performance levels.

I would expect RX Vega to be discontinued the instant GV106 launches. It's already hard to see any profit against the current competition (aftermarket 1070s).
 
I would expect RX Vega to be discontinued the instant GV106 launches. It's already hard to see any profit against the current competition (aftermarket 1070s).

I think Vega will have a very short life span in the consumer space regardless of GV106.
 
Isn't this card for workstations?
And if so, why are the benchmarks all gaming?

People are interested in how Vega will perform in gaming. AMD is being hush-hush, so we are left waiting, and using the FE for ballpark performance.
 
Isn't this card for workstations?
And if so, why are the benchmarks all based on gaming? (n)
 
Wait, are you talking about them letting PCWorld do a subjective comparison of gameplay just because? That's nowhere near a benchmark. They asked to see it playing games and AMD obliged.


All these were done by AMD themselves. Check these quotes from the linked article:

1) "...AMD set up two PCs with identical everything..."
2) "...AMD didn't show us all this out of kindness..."
 
With HBM2 memory, a ~550mm2 die, and a 300W power delivery / cooling system, there won't be any money to be made if it comes out at $399. After accounting for yields, shipping and packaging costs, AIB profit, e-tailer / retailer profits, and eventual RMA / warranty work, that leaves no room for margins. AMD's only real option would be to make every Vega die a professional or prosumer part.

Besides, all nvidia has to do is lower the price of the much cheaper GTX 1080 to $450 and the GTX 1070 to $300, and boom, Vega is DOA. A $300 GTX 1070, which consumes half the power and is 75-80% as fast as Vega, makes Vega look terrible, while a similarly performing part with 100 fewer watts of heat dump into a user's room and case, and OC headroom to spare, still looks more attractive than a $400 Vega at stock 1080 performance levels.

Problem is, Nvidia is selling all the 1070s they can, and prices are currently even higher than at launch. Mining has this idea screwed. Nvidia dropping prices to compete while they are still selling out would result in some massive kvetching from shareholders. Can you imagine our resident share-price shills waking up to 30% lower profits and share price drops one week? Reeeeeeeeeeeeee

But I do agree there is not much margin in there. But this is assuming it's still costing them $160 for HBM, which I highly doubt at their volumes and with more supply in the channel now.

I always try to be rational! ;)
As I said in my previous post, there is one fact as a given: AMD themselves have benchmarked the Vega FE in certain games of their choice. Did they state anywhere that these numbers are inaccurate? No, there is no such statement (from what I know, at least).
So, with this fact as a given, it's not my job to pretend that I'm AMD's mentalist. If they want to share more info, I'll be glad to listen. If not, then I'll conclude my thoughts with the given facts I have so far (and I've already said which are the given facts so far). That's my way of thinking rationally! :pompous:

Fair point. Sorry, the "being rational" wasn't aimed directly at you or anyone else; it was more just a "let's stop and mull this over for a second, [H]".
In the same vein, they also said the gaming cards would be faster and cheaper. My problem is when we are clearly looking at old drivers on a multi-use card, then insinuating the gaming version will suck just as much. It's best if we leave the jury out until we have RX Vega, or whatever they are calling the gaming card, and some more mature drivers, e.g. a month or so after launch.

I'll happily say RX Vega is shit if there isn't much difference, because I'm not a blind fanboy shilling no matter what for my favourite stock prices like some (e.g. the obvious "intel/nvidia can do no wrong" crowd). But I'm going to wait until we have better data on gaming performance with a gaming card, not January drivers with no TBR etc. and a clock/perf regression vs Fiji.

GV106 will be 1070-1080 speed, so it won't compete on performance if they get Vega going better. It will, however, use less power, which will be shilled to all hell even if there is a 20% performance delta at 4K. Of course the reverse doesn't apply; crickets on Intel being hot and far less efficient than Ryzen.
 
Anyone who uses this "Reeeeeeeeeeeeee" or any other current/past interwebs lingo (see, I can 2), I never trust. Like that kid in high school who wore whatever was coolest. Which now is whatever "unique" means, and yet they all look the same...

Yeah, I'm old. Get off my driveway. We have those.
 