AdoredTV Discusses the Recent AMD Ryzen and Radeon 3000 Series Leaks

Damn, I am going to have to change my X370 to an X570 board. It all looks good though; with the higher clock speeds they should finally give Intel a run for their money. I wonder what IPC improvement they will bring?
 
I went from the Phenom II 940 at 3.8 GHz to an R5 1600 (3.4 GHz stock), absolutely no regrets. The performance difference was insane.

glad to hear you are very happy ^.^

I know the performance uplift from any Phenom II to any and all Ryzen is massive in pretty much every regard, at similar or lower power consumption. I have just been on the fence due to money issues and the cost of that build; I want the absolute "best" value for the dollars spent (especially living in Canada, with our "Canada tax" as it is).

It's kind of easy to see the performance/price difference between the 1xxx and 2xxx chips (simple math to break down dollars per core, dollars per thread, and dollars per unit of performance), but they are still not easy choices.
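That "simple math" is easy to script. A toy sketch of the comparison; the prices and the performance index below are made-up placeholders, not real street prices:

```python
# Toy value comparison between two hypothetical Ryzen SKUs.
# All prices and perf_index numbers are illustrative placeholders.

def value_metrics(price, cores, threads, perf_index):
    """Dollars per core, per thread, and per unit of performance."""
    return {
        "per_core": price / cores,
        "per_thread": price / threads,
        "per_perf": price / perf_index,
    }

r5_1600 = value_metrics(price=160, cores=6, threads=12, perf_index=100)
r5_2600 = value_metrics(price=200, cores=6, threads=12, perf_index=112)

# Same core count, so the newer chip only wins if its perf-per-dollar does:
print(r5_1600["per_perf"], r5_2600["per_perf"])
```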

I've read that a few folks had stutter issues and the like going from the lowest-end Ryzen to the highest end. I know going with a good B450 or X470 is the "best bet," since it gives the best possible support for 1xxx or 2xxx chips (without requiring a BIOS update), but the pricing and the "choice" are still not exactly easy when we're talking potentially thousands to make it happen.

If I recycle/reuse some things it will drop to ~$1,400 all told (up to a 2700X), though with new GPUs on the cusp of coming out, etc., staying with my 955 and 7870 "for now" is probably the best bet. I need to keep myself "focused"; at least I will have a wide selection of chips at lower cost than current once old stock gets cleared out. The 2600X seems like a great option, as did the 1700 (if the pricing was right); the 1500X not so much, when the 6-cores were very close price-wise.
 
Ryzen's raw IPC is nearly on par with Intel's tick, tock, tock, tock, tock, etc. 14nm *-lake architectures. The biggest hurdle has been the raw clock-speed difference between the two companies' chips, especially when running on only one or two cores.

Ryzen and Threadripper getting up to Intel's speeds is really going to result in a big change in market share.

Buckaroos

Banzai.
 
Mindshare is worth a lot. Especially if developers feel that AMD GPUs will have a higher adoption rate.

How much effort goes into testing games for compatibility and performance on Nvidia cards? I'm guessing a whole lot more than AMD.

If AMD floods the market with cheap, high-performing midrange cards that don't have special RTX features, there's less effort to develop for them than for RTX.

Think about how well AMD cards work in Doom and Wolfenstein.

If AMD can get developers focusing on them instead of Nvidia, their performance will look even better. Once engine developers are focused on Navi, both PC and console, that's when I'd release a high-end 2080 Ti-class card.

Of course, developer mindshare is going to take a lot to change: higher adoption rates by end users, plus technical and financial incentives from AMD.
I'd expect this whole generation is targeted at increasing adoption rates in preparation for next-gen consoles. Navi everywhere makes a compelling argument for less porting work.

"You want cheap performance, or a fancy raytracing gimmick?" The marketing writes itself.
 
Since these have separate chiplets, does that imply an easier boost across 2x the cores? As in, previously we had a highly binned monolithic die that could hit 5.0 GHz on one core. Would two highly binned chiplets make it easier to have a 2-core 5.0 boost, all else being equal?
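A crude way to see why two separately binned chiplets could help, assuming (unrealistically) that each core hits 5.0 GHz independently with some probability p — real binning is correlated across a die, so treat this purely as an illustration:

```python
from math import comb

# Back-of-envelope on the two-chiplet binning question above.
# p is an assumed per-core chance of hitting the target clock.

def p_at_least(k, n, p):
    """Probability that at least k of n cores hit the target clock."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.10  # assumed chance any single core makes 5.0 GHz

# Monolithic 8-core die: need two golden cores on the same die.
mono = p_at_least(2, 8, p)

# Two separately binned 8-core chiplets: one golden core per chiplet
# suffices for a 2-core boost (pair the best core from each chiplet).
two_chiplets = p_at_least(1, 8, p) ** 2

print(mono, two_chiplets)  # the chiplet route comes out ahead
```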
 
So, my 4770 proc/board died 3 weeks ago.

Which was just in time to get the 2700X that was available on Black Friday; if only it had waited to blow up until after these came out.

I don't need anything bigger, but ARRRRGGGHGHGHGHGHGHG ;)

At the same time, I did NOT upgrade the GTX 770 (since I'm just now getting to games that are limited, gotta love dad-gaming lag time), so now I am going to wait. But Big Green pissed me off hardcore with the 20XX release; I am not paying $600 for a card that can run games at 4K merely well enough.
 
Apologies if this is brought up later in the thread, but what you are missing here is that Vega is expensive to make, largely because of the HBM2 requirement and die size. Vega dies are pretty big. AMD could not price the Vega series much lower than they did at release and still make money on it. By all reported rumors, Navi is actually really cheap to make, both because it will be on 7nm and because it uses the (much) cheaper GDDR6. It is very possible they could price it in the sub-$300 range and still make good money on it.

Ok, and by the same token, RTX is expensive to make, which is part of the pricing picture, but the main part is what the company can get. Nvidia saw that people were willing to pay $1000 for a Titan, and that was pretty much the end of sane pricing at the high end of the video card market. In the case of the GTX 1080-level market segment, if the market price is $400 and you want to make inroads, $350 is aggressive. $250 is just stupid and will kill your ability to actually make money, because it lowers the expectations of the consumer. If Navi really is that cheap to make on the production side (notwithstanding R&D costs and other inputs beyond actual production), then the only possible business reason AMD could have to price it that low would be to flood the market specifically to take share and nothing more. While that is plausible, because they don't have a whole lot of share in either the CPU or GPU markets, going THAT aggressive is akin to giving the product away, and I just don't see the business case for doing so. I would contend that going from $350 to $250 in that scenario would do nothing to expand profitability and everything to kill the market price, making it more difficult to drive margins as they capture market share.

As it stands, both Intel and Nvidia have a gross profit margin close to 63%, which is double AMD's. That's the strength market leadership can bring, but AMD also needs to get its profit margins higher at some point if it wants to be competitive in the long run. They're way behind Nvidia in video technology, and although they have a CPU advantage over Intel at present, Jim Keller now works for Intel, with a significantly larger R&D budget to work with. AMD needs to make hay now while it can.
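To put rough numbers on that pricing argument — the $150 unit cost below is a made-up placeholder, not a known Navi figure:

```python
# Rough gross-margin arithmetic behind the $350-vs-$250 argument above.
# unit_cost is an assumed all-in production cost per card (placeholder).

def gross_margin(price, unit_cost):
    """Gross margin as a fraction of revenue."""
    return (price - unit_cost) / price

unit_cost = 150.0

aggressive = gross_margin(350, unit_cost)  # ~0.57, near the ~63% leaders enjoy
stupid = gross_margin(250, unit_cost)      # 0.40, a far thinner margin

print(aggressive, stupid)
```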
 
So, my 4770 proc/board died 3 weeks ago.

Which was just in time to get the 2700X that was available on Black Friday; if only it had waited to blow up until after these came out.

I don't need anything bigger, but ARRRRGGGHGHGHGHGHGHG ;)

At the same time, I did NOT upgrade the GTX 770 (since I'm just now getting to games that are limited, gotta love dad-gaming lag time), so now I am going to wait. But Big Green pissed me off hardcore with the 20XX release; I am not paying $600 for a card that can run games at 4K merely well enough.

I wouldn't worry about it too much. Unlike what you generally find with Intel stuff, multi-generation chip upgrades on AMD are a thing. If a 3000-series chip comes out that you find really compelling, just sell or re-purpose the 2700X. (Just make sure you update that mainboard firmware first :))
 
There’s no way anyone with a shred of business sense would create a card that can beat a GTX 1080 and sell it at such a huge discount. A small discount to make it enticing? Sure. But that big a delta? I mean, we can all dream, but it would be completely stupid to do from a business standpoint.

Given the hype we were fed for Polaris and Vega, two cards I was interested in for upgrading my R9 280X only to be completely disappointed when they actually launched, I would file this under “believe it when I see it”.

The only reason I can see them pricing them that low is if it is somehow below Nvidia's cost to produce, package, and put them on shelves. In that case, even if the profit margin is tiny, AMD would gain market share, and market share = advertising in the long run. Add in FreeSync monitors, which are cheap and a peripheral most people keep for many years; if AMD gains a lot of market share by sacrificing short-term profits, it will likely help them more in the long run. If Nvidia can lower RTX prices to match and still turn a profit, then AMD would have a problem. In that case, $350 or so for RTX 2070-level cards makes sense.
 
Ok, and by the same token, RTX is expensive to make, which is part of the pricing picture, but the main part is what the company can get. Nvidia saw that people were willing to pay $1000 for a Titan, and that was pretty much the end of sane pricing at the high end of the video card market. In the case of the GTX 1080-level market segment, if the market price is $400 and you want to make inroads, $350 is aggressive. $250 is just stupid and will kill your ability to actually make money, because it lowers the expectations of the consumer. If Navi really is that cheap to make on the production side (notwithstanding R&D costs and other inputs beyond actual production), then the only possible business reason AMD could have to price it that low would be to flood the market specifically to take share and nothing more. While that is plausible, because they don't have a whole lot of share in either the CPU or GPU markets, going THAT aggressive is akin to giving the product away, and I just don't see the business case for doing so. I would contend that going from $350 to $250 in that scenario would do nothing to expand profitability and everything to kill the market price, making it more difficult to drive margins as they capture market share.

As it stands, both Intel and Nvidia have a gross profit margin close to 63%, which is double AMD's. That's the strength market leadership can bring, but AMD also needs to get its profit margins higher at some point if it wants to be competitive in the long run. They're way behind Nvidia in video technology, and although they have a CPU advantage over Intel at present, Jim Keller now works for Intel, with a significantly larger R&D budget to work with. AMD needs to make hay now while it can.

I agree to a point, but historically AMD graphics cards have pushed higher performance down-market at the same or similar price points in relatively short order. The RX 480 8GB at MSRP ($200) was delivering R9 290X performance with twice the RAM capacity a hair less than 3 years later; the R9 290X's MSRP was $549. The GTX 1080's MSRP is still close to $500 (though it can frequently be found for less, of course) and that card has been out for over 2 years. By the time these Navi units release, the theorized $250 price point isn't really a stretch compared to AMD's past practice.

One thing you have to consider is that AMD has had a superior product to nVidia several times, just not very recently. nVidia always managed to sell more cards regardless, even when the nVidia parts were significantly behind. AMD knows this, so the only real way they have to increase market share is to price the cards aggressively if they have the room to do so. This is what they have done vs. Intel, and for a variety of reasons it is paying off in spades. We'll see how well the same strategy works against nVidia.
 
The only reason I can see them pricing them that low is if it is somehow below Nvidia's cost to produce, package, and put them on shelves. In that case, even if the profit margin is tiny, AMD would gain market share, and market share = advertising in the long run. Add in FreeSync monitors, which are cheap and a peripheral most people keep for many years; if AMD gains a lot of market share by sacrificing short-term profits, it will likely help them more in the long run. If Nvidia can lower RTX prices to match and still turn a profit, then AMD would have a problem. In that case, $350 or so for RTX 2070-level cards makes sense.

It wouldn't surprise me if the two Navi chips are smaller than 150 mm², which would make them fairly cheap to manufacture.
 
I could see some of the CPU rumors coming to fruition, though with higher pricing, but the GPU side does not make sense from a pricing and product-segmentation point of view.

Because of the time and cost of 7nm FinFET development, AMD is at most going to release two GPUs in 2019, similar to how they needed to scale back with their first 14nm products.

Compare the RX 3070 and RX 3080 and you will notice something weird: they attack nearly the same segment with two chips, because there is only around a 25% difference in performance between an RX 3070 and an RX 3080. They are covering a span of performance barely larger than the gap between an RX 470 and an RX 480, with two chips. That doesn't make sense; it's poor segmentation and a waste of resources. This is not something a competent CEO (and Lisa Su is very competent) would do, considering AMD's resource situation and its need to address as much of the market as possible with as small a product stack as possible.

What AMD should be doing, and what both Nvidia and AMD have typically done in the past, is target the RX 3070 and RX 3080 class of GPU with a single chip, with a much smaller chip at the $129 price point to serve the budget/mainstream laptop market. That gives more flexibility with pricing, because cards in this segment have to be very cheap due to their high volume with system builders.

They did this with the RX 460 and the RX 470/480, and to a similar extent with Pitcairn and Cape Verde. Nvidia deviated with the RTX 2070 and 2080, but they are rich in resources and have a $200 price difference between the cards, meaning there will be less product cannibalization going on.

But AMD is in a situation where it has to be as efficient as possible with its manpower. This means cards with clear segmentation in terms of performance, so you're throwing away less die space, and optimal performance per mm², which increases margins for the company.

With Navi 12 and Navi 10 sharing a 256-bit bus (and the same ROPs), the primary difference is going to be shader count, which will only yield a 10-15% saving in die space against the 25% difference in performance, since AMD's shaders don't take up that much space.

As this illustrates, the approach is highly inefficient: you're attacking the $129 market with almost the same GPU die as the $250 market. This makes the rumors very suspect.
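The die-area point can be sketched numerically. If the fixed blocks (the 256-bit memory PHY, ROPs, display, etc.) take an assumed 40% of the die — an illustrative figure, not a measured Navi floorplan number — cutting shaders by 25% saves little:

```python
# Sketch of the shared-die argument above. The 40% "fixed" share is an
# assumed figure for illustration only.

def die_savings(fixed_fraction, shader_cut):
    """Fraction of total die area saved by cutting shader area by shader_cut."""
    shader_fraction = 1.0 - fixed_fraction
    return shader_fraction * shader_cut

# Two chips sharing a 256-bit bus and ROPs, one with 25% fewer shaders:
saved = die_savings(fixed_fraction=0.40, shader_cut=0.25)
print(saved)  # ~0.15, i.e. only ~15% smaller, matching the 10-15% estimate
```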

In addition, Vega 20 is already made on 7nm and has performance along the lines of Vega 64 + 15 percent, thanks to a 20% clock advantage (which would not all translate into performance). The power consumption of this card is 300 watts, which represents a 20% increase in performance per watt vs. Vega 64.

Although Vega 20 has double precision, which takes away some efficiency, it also has HBM2, which decreases power consumption. This makes it seem impossible to me for AMD to achieve Vega 64 + 15% at 150 watts. Bigger, wider chips have better efficiency than smaller ones: they can clock lower for the same performance, which lets them run at lower voltage, and because power consumption is primarily a function of clocks and voltage, they have the potential for greater efficiency. Because Navi is a smaller chip, it will need higher clocks to achieve this level of performance. So where is this doubling in efficiency coming from, if Vega 64 to Vega 20 only yielded 20% better performance per watt?
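The clock/voltage relationship invoked here, in rough form: dynamic power scales approximately with f·V², and voltage rises with clock, so a wide chip clocked low beats a narrow chip clocked high at equal throughput. The 0.9/1.1 scaling pairs below are assumptions for illustration:

```python
# Dynamic-power scaling sketch for the efficiency argument above.
# P ~ f * V^2; the relative clock/voltage pairs are illustrative only.

def rel_power(f_rel, v_rel):
    """Dynamic power relative to a baseline, given relative clock and voltage."""
    return f_rel * v_rel**2

wide = rel_power(0.9, 0.9)    # wide chip: 10% lower clock and voltage
narrow = rel_power(1.1, 1.1)  # narrow chip pushed 10% on clock and voltage

print(wide, narrow)  # ~0.73 vs ~1.33 of baseline for the same throughput
```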

Although Navi might be a slightly different architecture, it is still GCN, and most of the available increases in IPC have already been obtained, as Vega's lackluster IPC has shown.

I agree to a point, but historically AMD graphics cards have pushed higher performance down-market at the same or similar price points in relatively short order. The RX 480 8GB at MSRP ($200) was delivering R9 290X performance with twice the RAM capacity a hair less than 3 years later; the R9 290X's MSRP was $549. The GTX 1080's MSRP is still close to $500 (though it can frequently be found for less, of course) and that card has been out for over 2 years. By the time these Navi units release, the theorized $250 price point isn't really a stretch compared to AMD's past practice.

One thing you have to consider is that AMD has had a superior product to nVidia several times, just not very recently. nVidia always managed to sell more cards regardless, even when the nVidia parts were significantly behind. AMD knows this, so the only real way they have to increase market share is to price the cards aggressively if they have the room to do so. This is what they have done vs. Intel, and for a variety of reasons it is paying off in spades. We'll see how well the same strategy works against nVidia.

AMD/ATI Markham has made products superior to Nvidia's. That team is gone after the mass firings AMD made to stay afloat.

The Shanghai branch's first product was the R9 285 (Tonga), and from Tonga through Vega it does not have a good track record. There has not been a product from that team that has met people's expectations, which is why I am not getting my hopes up for this one.
 
In the case of the GTX 1080-level market segment, if the market price is $400 and you want to make inroads, $350 is aggressive. $250 is just stupid and will kill your ability to actually make money, because it lowers the expectations of the consumer.

Customers wouldn't receive the benefit of a hypothetical $250-MSRP 1080 equivalent, however. AIBs, resellers, and flippers with bots would see to that. And it would manifest here with posts complaining about "gouging".

All that an underpriced GPU would succeed in doing is giving big bags of money away to middlemen and leaving pissed-off AMD shareholders, which would be out of character for Lisa Su, who has been pretty measured and conservative in her decisions. Basically, it wouldn't happen.
 
Doubtful, but it would be really cool to see triple-channel DDR4, for two reasons:

- 24 GB seems like a great amount for future-proofing current power users (ignoring the "I am running 4 VMs while video editing and transcoding" uber class)
- The extra bandwidth would really help the APUs.

There are always TR's quad channels, but then you are talking expensive boards that need at least 32 GB of RAM, as well as being huge, making smaller form factors impossible.
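For scale, DDR4 peak bandwidth is just transfer rate × 8 bytes × channels, so a hypothetical triple channel would sit neatly between today's dual channel and TR's quad channel:

```python
# Peak-bandwidth arithmetic for the channel-count discussion above.
# DDR4 moves 8 bytes per transfer per 64-bit channel.

def ddr4_peak_gbs(mt_per_s, channels):
    """Peak DDR4 bandwidth in GB/s for a transfer rate and channel count."""
    return mt_per_s * 8 * channels / 1000

dual = ddr4_peak_gbs(3200, 2)    # 51.2 GB/s, typical desktop dual channel
triple = ddr4_peak_gbs(3200, 3)  # 76.8 GB/s, the hypothetical triple channel
quad = ddr4_peak_gbs(3200, 4)    # 102.4 GB/s, Threadripper quad channel

print(dual, triple, quad)
```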

The buy-in on TR boards is a big issue for a lot of mid-level content creators I work with. They're happy to buy the 2700X all day long but really need to do the math to determine whether TR will actually benefit them financially. Unless your job depends on the speed at which your content is created, I have a hard time finding people who think an extra ~$500-$1,000 is worthwhile. The 2700X is realistically a whole lot of chip for the system buy-in price.
 
Does this remind anyone else of the HD 3870 and HD 4870? Not targeting the super high end, but still being competitive at an absolutely insane price.
 
I agree to a point, but historically AMD graphics cards have pushed higher performance down-market at the same or similar price points in relatively short order. The RX 480 8GB at MSRP ($200) was delivering R9 290X performance with twice the RAM capacity a hair less than 3 years later; the R9 290X's MSRP was $549. The GTX 1080's MSRP is still close to $500 (though it can frequently be found for less, of course) and that card has been out for over 2 years. By the time these Navi units release, the theorized $250 price point isn't really a stretch compared to AMD's past practice.

One thing you have to consider is that AMD has had a superior product to nVidia several times, just not very recently. nVidia always managed to sell more cards regardless, even when the nVidia parts were significantly behind. AMD knows this, so the only real way they have to increase market share is to price the cards aggressively if they have the room to do so. This is what they have done vs. Intel, and for a variety of reasons it is paying off in spades. We'll see how well the same strategy works against nVidia.

Oh, I agree, AMD will discount; they don't have a choice. They won't discount by 40%, though. At least I don't see why they would, even with the delta in market share: it would kill their profit potential. 10-20% would be more realistic if they want to grab share.

If they do want to give away GTX 1080 performance for $250 though, I’ll take two, thanks! Lol
 
This makes me moist.
 
AMD aren't going for raw profit, they're going for market share. Lisa has shown again and again that she plays the long game, not the short gain. By making it basically impossible to justify the Nvidia alternative, AMD just might sell 1 for every 4 Nvidia cards.
 
AMD aren't going for raw profit, they're going for market share. Lisa has shown again and again that she plays the long game, not the short gain. By making it basically impossible to justify the Nvidia alternative, AMD just might sell 1 for every 4 Nvidia cards.

She's smart. AMD is the Tortoise, Nvidia is the Hare. Or she's basically in cahoots where her uncle "You make the high end", "We'll make the mid ranged" & keep all the money in the family :p
 
Oh, I agree, AMD will discount; they don't have a choice. They won't discount by 40%, though. At least I don't see why they would, even with the delta in market share: it would kill their profit potential. 10-20% would be more realistic if they want to grab share.

If they do want to give away GTX 1080 performance for $250 though, I’ll take two, thanks! Lol

Yeah, again, the problem with the idea of AMD setting MSRP lower than necessary is that resellers and flippers will balloon it back to the price point of the comparable Nvidia part; demand would be very high. They'd have to produce tons of them to create enough of a wave. Maybe there's some merit to the idea of AMD selling direct with a quantity limit per address to get buzz going initially.

If only they could start pumping these out today, because Nvidia has definitely left an opening in the market as resale values of Pascal cards have ballooned in revolt against the 2000-series pricing.
 
She's smart. AMD is the Tortoise, Nvidia is the Hair. Or she's basically in cohoots where her uncle "You make the high end", "We'll make the mid ranged" & keep all the money in the family :p

I think you are getting Nvidia "Hair" Works mixed up with Hare. And it's cahoots. Sorry, having a grammar/spelling Nazi moment here. :p
 
Fine, I don't have a Navi GPU to test. But given the current generation's performance, that 15 (or even 20) CU APU should play most mainstream games at 1440p if you don't insist on maxing out graphics settings. I know lots of people who just install the game, set it to their native res (if that's even needed), and play. They only mess with settings if it feels slow. Why do you think games started auto-detecting settings rather than defaulting to low-medium? If you loaded Overwatch on that setup right now with a 1440p display, it would likely give you 1440p at high-medium settings. Most peeps would play the crap out of it without complaint.

Sorry, I should have thrown a 'mainstream' in there somewhere so enthusiasts wouldn't get butthurt.
You don't need a Navi to test. It will be nice and efficient and will work just fine, as do the current Vega APUs. BUT it will still be sharing the SAME DDR4 system RAM as the current APUs, so memory bandwidth will still be very limited.
 
You don't need a Navi to test. It will be nice and efficient and will work just fine, as do the current Vega APUs. BUT it will still be sharing the SAME DDR4 system RAM as the current APUs, so memory bandwidth will still be very limited.

You're looking at the same core configuration as an RX 560, so even with quadruple the bandwidth, I doubt it would be able to run 1440p. Even then, unless the I/O chip can work miracles, it's going to be starved for bandwidth.
 
You're looking at the same core configuration as an RX 560, so even with quadruple the bandwidth, I doubt it would be able to run 1440p. Even then, unless the I/O chip can work miracles, it's going to be starved for bandwidth.

I am not saying I am optimistic that the GPU-on-chiplet design will bring us 1440p-capable integrated gaming.

However... we can't rule it out at this point, for one very cool reason.

The chiplet design means AMD is moving the RAM controller off the actual "CPU" die. There is no reason they can't access a second pool of memory. I know it sounds a bit insane, but there isn't really anything stopping AMD from adding a graphics-RAM controller to the controller chip on the CPU package... and accessing the RAM via PCIe.

If AMD wanted to offer a very fast "APU", they could slot in 2 Ryzen CPU chiplets and 2 Navi 10 chiplets, which would give the chip as many CUs as the supposed 2070-performing $250 part. They then drop an 8GB graphics-RAM card into a PCIe or M.2 slot... and it runs very close to (perhaps even slightly faster than) the standalone card.

If I were a betting man... that is very much what AMD is building. It's designed for the PS5, after all. My bet is the PS5 will feature 8GB of RAM and 8GB of GDDR6 with one chiplet CPU, running 2 Ryzen and 2 Navi chiplets, with the chiplet host controller taking care of routing to the two pools of RAM. Two 4-core Ryzens would give them 8 cores/16 threads to talk about (I can't imagine they go 6 or 8 cores per chiplet, unless they go 6 for yield purposes; 12/24 would be interesting), and 2 Navis operating as one card via the chip controller would let them use darn near 100% of their silicon if they don't try to go crazy on clocks, making for a pretty cost-effective system in terms of bang for the buck for Sony. In terms of bandwidth, the chiplet design lets them do something like double Navi and have the controller chip split the work and bandwidth across dual-channel GDDR. (It's possible it could actually be a monster in terms of performance, finally delivering on the promised speed-ups from integrating CPU and GPU.)

If that is the case it may make for some interesting low power mini PCs in a year or so.
 
They then drop an 8GB graphics-RAM card into a PCIe or M.2 slot...

If only that were possible.
Running GPU RAM over the PCIe bus would be ULTRA slow. RAM needs direct connections to the GPU because we're talking 256+ GB/s of bandwidth. PCIe 4.0 x16 has a max of 64 GB/s of bandwidth, and don't even get me started on the latency...
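The gap is easy to quantify: peak GDDR6 bandwidth is per-pin rate × bus width / 8, and even the 64 GB/s PCIe 4.0 x16 figure is a fraction of it. All numbers are theoretical peaks:

```python
# Bandwidth comparison behind the objection above: GPU-local GDDR6 vs the
# PCIe pipe the GPU would have to reach through.

def gddr6_peak_gbs(gbps_per_pin, bus_width_bits):
    """Peak GDDR6 bandwidth in GB/s from per-pin rate and bus width."""
    return gbps_per_pin * bus_width_bits / 8

gpu_local = gddr6_peak_gbs(14, 256)  # 448 GB/s: 256-bit bus at 14 Gbps/pin
pcie4_x16 = 64.0                     # GB/s, the PCIe 4.0 x16 figure above

print(gpu_local / pcie4_x16)  # 7.0: local memory has 7x the bandwidth
```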
 
Yeah, again, the problem with the idea of AMD setting MSRP lower than necessary is that resellers and flippers will balloon it back to the price point of the comparable Nvidia part; demand would be very high. They'd have to produce tons of them to create enough of a wave. Maybe there's some merit to the idea of AMD selling direct with a quantity limit per address to get buzz going initially.

If only they could start pumping these out today, because Nvidia has definitely left an opening in the market as resale values of Pascal cards have ballooned in revolt against the 2000-series pricing.

The fact that Pascal prices have jumped blows my mind. I picked up a GTX 1070 Ti last month on clearance and I’m glad I did, because I can’t find another deal like that one even a few weeks after purchase.
 
IPC will increase slightly - say 10-15% ...
You think that's "slightly"!?
Given that it's just going from Zen to Zen 2, it's a major increase!
Has Intel even been able to get that far in the last five or more generations of Core?

Does this mean more RAM channels?
Most probably not.

Since these have separate chiplets, does that imply easier boost of 2x the cores? ...
Interesting thought...
Technically I think you're on to something.
The limiting factor would be power/heat, so paired with sufficient cooling why not? (For the CPUs with two chiplets.)

Yeah, again the problem with the idea of AMD setting MSRP lower than necessary is resellers and flippers will balloon it back to the pricepoint of the comparable Nvidia part -- demand would be very high. They'd have to produce tons of them to create enough of a wave. ...
I agree. If the rumored power/performance is anywhere near true, miners will likely queue up to buy all these graphics cards, since they'll make mining profitable again. Hopefully cryptocurrencies won't go up enough to restart the craze.
Increased selling prices and purchase limitations could "help" make these cards accessible to the rest of us.
 
If only that were possible.
Running GPU RAM over the PCIe bus would be ULTRA slow. RAM needs direct connections to the GPU because we're talking 256+ GB/s of bandwidth. PCIe 4.0 x16 has a max of 64 GB/s of bandwidth, and don't even get me started on the latency...

Well, except GDDR6, which is the memory used by 2000-series NV cards, maxes out at 16 Gbps per pin, and 2080 Tis run at 14 Gbps. Theoretical top ends of ~600 GB/s on 384-bit buses are nice marketing figures, not actual real operating speeds.

PCIe 3.0: 8 GT/s per lane, 32 GB/s total over x16 (bidirectional)
PCIe 4.0: 16 GT/s per lane, 64 GB/s total over x16

The planned push to PCIe 5.0 brings things to 32 GT/s and 128 GB/s. 5.0 is slated for 2019 and could well find its way into the PS5, depending on how ambitious Sony and AMD are.

Latency isn't even really a major issue anymore. AMD was already using Infinity Fabric to reduce latency; 2400G latency is almost identical to cards like the GT 1030. Granted, that isn't high end. I'm just saying current Ryzen G processors have around the same latency as low-end dedicated cards. Building a dedicated GPU memory channel into an I/O controller in Zen 2 is only going to improve that.

I'm not saying AMD is going to have a Zen 2 Ryzen G with 2080 performance... of course not. But 2-3x the performance of the current G parts is pretty doable if we are talking about doubling (perhaps quadrupling, with PCIe 5.0) the lane speed, as well as using a dedicated GDDR memory controller instead of DDR4. 2080 Ti performance? No, of course not. But 1060/2060-level performance, I don't think that is beyond what is achievable... it's only a couple of steps up from where things are now.
 
AdoredTV also said the RX 480 would compete with a 1080 before it launched, and it could barely beat a 1060 in a few DX12 titles, lol. This guy loves to overhype AMD stuff.

Might be good performance, but $250 for 2070 performance? Doesn't make sense. Why would they price it at $250, when they could easily price it at $300-$350 and it would still be a great deal, considering the 2070 goes for $550+?
 
AdoredTV also said the RX 480 would compete with a 1080 before it launched, and it could barely beat a 1060 in a few DX12 titles, lol. This guy loves to overhype AMD stuff.

Might be good performance, but $250 for 2070 performance? Doesn't make sense. Why would they price it at $250, when they could easily price it at $300-$350 and it would still be a great deal, considering the 2070 goes for $550+?

They are in competition mode. Sleazy Nvidia is milking the fan base and thinking the crypto price wave will continue. The only bonus for end users is a free game of Space Invaders.
If AMD can make a profit at that price, they will be in more systems, and that's their goal.
I'm rocking a 1070 now, but at those post-crypto prices, I'll build a new AMD CrossFire system next year.
 
They are in competition mode. Sleazy Nvidia is milking the fan base and thinking the crypto price wave will continue. The only bonus for end users is a free game of Space Invaders.
If AMD can make a profit at that price, they will be in more systems, and that's their goal.
I'm rocking a 1070 now, but at those post-crypto prices, I'll build a new AMD CrossFire system next year.

Even if it were true, you'll never get them at $250. Retailers will take advantage of increased demand like they always do and raise prices, like when Amazon was selling the GTX 1060 for $400 during the mining craze when the MSRP was $260.
 
Even if it were true, you'll never get them at $250. Retailers will take advantage of increased demand like they always do and raise prices, like when Amazon was selling the GTX 1060 for $400 during the mining craze when the MSRP was $260.

This is true. Unless, AMD sold them directly from their site.
 
This is true. Unless, AMD sold them directly from their site.
It'd still be true; they'd just be sold out on their site and you'd have to buy them on eBay/Amazon instead. Well, assuming the scalpers don't ignore their site for some reason.

If it were me, I'd let the scalpers buy until they go broke and wait for stock to come back at MSRP. Unfortunately, I'm not everybody, so scalpers are gonna make bank.
 
Don't be surprised if we don't see consumer 7nm AMD Ryzen 3000 CPUs and X570 motherboards until Q3 2019. AMD has already made it clear that the first 7nm batch is not for consumers. CES 2019 would be too soon for such a release; Computex in June 2019 at the earliest.

AMD has been quite vague about a specific consumer release date, and based on AMD's release history I'm remembering the Vega GPU that was announced at CES 2017 but didn't get released until Q3 2017, like 9 months later.

AMD has been clear that the first round of 7nm AMD CPUs will *NOT* be for consumers. CES seems too early for such a release. I hope I'm wrong, but again, based on AMD's release history it could be announced at CES and still not be released for 9 months.

And we won't see PCIe 4.0 on consumer boards until 2020. Btw, PCIe 4.0 will be for consumers while PCIe 5.0 will not be for consumers. PCIe 5.0 will only be for AI and science, medical research etc.


 
And we won't see PCIe 4.0 on consumer boards until 2020. Btw, PCIe 4.0 will be for consumers while PCIe 5.0 will not be for consumers. PCIe 5.0 will only be for AI and science, medical research etc.

Sure, perhaps on the initial roll-out, but give it a few years.

Similar statement comes to mind:

"There is no reason for any individual to have a computer in his home." Digital Equipment Corp. founder Ken Olsen

"640k ought to be enough for anybody." - Bill Gates
 
And we won't see PCIe 4.0 on consumer boards until 2020. Btw, PCIe 4.0 will be for consumers while PCIe 5.0 will not be for consumers. PCIe 5.0 will only be for AI and science, medical research etc.

PCIe 5.0 will be on consumer boards by early 2020.

The only real question is whether PCIe 4.0 will find its way into that many products, as everyone is expecting the push to 5.0 to happen pretty quickly. PCIe 4.0 boards may only be on the market for a year or two.
 
Don't be surprised if we don't see consumer 7nm AMD Ryzen 3000 CPUs and X570 motherboards until Q3 2019. AMD has already made it clear that the first 7nm batch is not for consumers. CES 2019 would be too soon for such a release; Computex in June 2019 at the earliest.

AMD has been quite vague about a specific consumer release date, and based on AMD's release history I'm remembering the Vega GPU that was announced at CES 2017 but didn't get released until Q3 2017, like 9 months later.

AMD has been clear that the first round of 7nm AMD CPUs will *NOT* be for consumers. CES seems too early for such a release. I hope I'm wrong, but again, based on AMD's release history it could be announced at CES and still not be released for 9 months.

And we won't see PCIe 4.0 on consumer boards until 2020. Btw, PCIe 4.0 will be for consumers while PCIe 5.0 will not be for consumers. PCIe 5.0 will only be for AI and science, medical research etc.




This article says almost word for word what is said in the video you posted.

https://www.techspot.com/news/77769-our-take-amd-zen-2-cpu-navi-gpu.html
 