AMD Press Conference at Computex

I'd rather have more than less. No clue what the future holds or how those lanes could be repurposed, but I'll take more for less money over Intel's less-for-more-money plan.

"More" is great if you have a reason for it. How about a motherboard with 20 PCI-e slots? Or one with 40 USB ports? That's "more" with no purpose. I'm trying to think of WHAT it could be used for. If it something that's 3-4 years down the road, then it's irrelevant, since I'll have a whole new MB/CPU in 3-4 years too.
 
Jebuz. The only thing I can hope for is that AMD is doing a fake out move to keep nvidia in the dark. But it doesn't look good.
The 30 Hz probably had to do with the projection system they were using. Not that big of a deal, but I still do not understand AMD's marketing. Launch Instinct first, then save RX Vega for Siggraph? Showing RX Vega running in CF? That scares me a bit.
 
Yeah, it's not a big deal; most projectors run 30 Hz anyway. The reason I bring this up is that people have been scrutinizing stills from the presentation to try to gauge how many frames were drawn in one refresh and guesstimate the framerate.
That is not going to happen.
 
Yeah, it's not a big deal; most projectors run 30 Hz anyway. The reason I bring this up is that people have been scrutinizing stills from the presentation to try to gauge how many frames were drawn in one refresh and guesstimate the framerate.

This is how I know you're talking about really fucking retarded people.
 
This is how I know you're talking about really fucking retarded people.

Not really, each tear represents the GPU reading a partially written frame from the buffer, so you can at least say that the GPU rendered X times as many frames in one screen refresh.
 
Not really, each tear represents the GPU reading a partially written frame from the buffer, so you can at least say that the GPU rendered X times as many frames in one screen refresh.

This approach doesn't make sense to me but you're right, probably I am the stupid one
 
This approach doesn't make sense to me but you're right, probably I am the stupid one

It's far from accurate, but it does give you a very rough estimate of the framerate; you could count the tears frame by frame and get an idea of it lol.
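Just to spell out the arithmetic (purely illustrative, not what anyone actually did with the presser footage, and it assumes every mid-scanout flip leaves exactly one visible tear and that you know the refresh rate):

# Guesstimate the framerate from tears counted in captured refreshes.
# Assumption (mine, not gospel): every buffer flip during active scanout
# shows up as one visible tear, so on average
#   tears per refresh ~= framerate / refresh rate.
def guesstimate_fps(tears_per_refresh, refresh_hz=30):
    avg_tears = sum(tears_per_refresh) / len(tears_per_refresh)
    return avg_tears * refresh_hz

# e.g. counting 2, 3 and 2 tears across three captured 30 Hz refreshes
print(guesstimate_fps([2, 3, 2]))  # ~70 fps, give or take a lot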


You could also just wait a month :p
 
Most modern projectors run at 60 Hz.

So if vsync was off, you'd see tearing above 60 Hz.

But unless you know the actual projector that's just a guess. Who knows if the AV guys at the conference center fine tuned anything.
 
Don't get me wrong, I'm excited about the products, but as you mentioned...

... was attached to just about every announcement.
Not true unless you only count the consumer-tier products:
  • EPYC - server chip, release date June 20
  • Vega Frontier Edition - pro GPU, release date June 27
Both less than a month away. The consumer products that might conceivably be part of a gaming rig (I'm counting Threadripper for high-end mixed use) were further out and more nebulous... Threadripper in "summer", so months away; for RX Vega they only announced that they'll announce more in two months.

(OK, I just noticed it was the consumer parts, inc. APUs, originally mentioned)
 
"More" is great if you have a reason for it. How about a motherboard with 20 PCI-e slots? Or one with 40 USB ports? That's "more" with no purpose. I'm trying to think of WHAT it could be used for. If it something that's 3-4 years down the road, then it's irrelevant, since I'll have a whole new MB/CPU in 3-4 years too.

Yes, but everything you quoted takes up physical space. This is a CPU design we're talking about; it would take up the same amount of space with 12 lanes. You're arguing fruits and vegetables in a meat market.

Intel makes CPUs. AMD makes CPUs. Intel gives you fewer cores and fewer lanes for more money. AMD gives you more cores and more lanes for less money. Your argument makes zero sense. Shit, if you're just going to say more with no current use is dumb, why are you commenting on a thread about a 16 CORE 32 THREAD CPU!?!? This is for people that want more, and who knows, maybe there are people out there looking at these high-end chips going hmm, I want that bang for the buck, and they can use it. Hell, based on your logic I'd love to see your current setup. If you have a single port or slot open on your motherboard, I would like to ask you why you got something you may not use. What about your vehicle? Does it go over 60 MPH? Have you ever maxed out its hauling or towing capacity? If not, why do you have it?

And just because you upgrade your system every 3 years doesn't mean everyone who buys these will. Hell, we still have a lot of people on these boards running hex-core 56XX series chips, because they needed more than current setups offer.

I used to have an SR2 (actually, two of them). Obviously I'm in the market for more. If someone is looking for higher-end parts, I can't think of a single scenario where more for less is a bad thing.
 
How about a motherboard with 20 PCI-e slots? Or one with 40 USB ports? That's "more" with no purpose.
I would actually love a motherboard with more USB ports. I find I have them all filled within weeks. Once you have a mouse, keyboard, gamepad, headphones, mic, external HDD, VR headset, Wacom tablet, 3D glasses, one free port to charge the vape, lol, etc., they are all gone (and I think having to buy a hub is kind of makeshift).
 
"More" is great if you have a reason for it. How about a motherboard with 20 PCI-e slots? Or one with 40 USB ports? That's "more" with no purpose. I'm trying to think of WHAT it could be used for. If it something that's 3-4 years down the road, then it's irrelevant, since I'll have a whole new MB/CPU in 3-4 years too.

For somebody like the NSA or Google being able to run that many storage devices would be a gold mine of productivity and relatively low power usage.
 
Most modern projectors run at 60 Hz.

So if vsync was off, you'd see tearing above 60 Hz.

But unless you know the actual projector that's just a guess. Who knows if the AV guys at the conference center fine tuned anything.

You can get tearing even if the frame rates are well inside the Hz range of the monitor.
 
My brother will be really interested in the Epyc chip for professional desktop use. He runs huge Excel models.
 
My brother will be really interested in the Epyc chip for professional desktop use. He runs huge Excel models.
Very little of Excel work is parallel. It's very linear in nature because of the dependencies between cells.

We have Excel sheets in excess of 100 MB of data on disk. It's all largely dependent data, dynamically generated from a set of input parameters. However, it doesn't peg the processor.
 
You can get tearing even if the frame rates are well inside the Hz range of the monitor.

I agree. 45 fps would result in a tear every other frame on a 30 Hz display, in the form of:
tear
No tear
tear
No tear
....etc etc

But it is possible to get an approximation of the frame rate based on where the tearing occurs in each frame. We did something similar with resonance frequencies, using a strobe light and slowing it down and speeding it up until the sand on the vibrating surface was stationary. We would then check the strobe frequency to determine the resonance. You can do the same with frame rates... to a certain extent. You could figure out the value if the frame rate didn't vary too much, but it could also be a multiple of that, i.e. 24, 48, 96, 192, etc.
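To make the strobe comparison concrete (a toy sketch under my own assumption that the observed pattern looks the same whenever the true rate is an integer multiple of the base rate, just like the sand looking frozen at any multiple of the strobe frequency):

# List the rates that would produce the same repeating pattern,
# i.e. every integer multiple of the base rate up to some cap.
def candidate_rates(base_hz, max_hz=200):
    return [n * base_hz for n in range(1, max_hz // base_hz + 1)]

print(candidate_rates(24))  # [24, 48, 72, 96, 120, 144, 168, 192]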
 
Yes, but everything you quoted takes up physical space. This is a CPU design we're talking about; it would take up the same amount of space with 12 lanes. You're arguing fruits and vegetables in a meat market.

Intel makes CPUs. AMD makes CPUs. Intel gives you fewer cores and fewer lanes for more money. AMD gives you more cores and more lanes for less money. Your argument makes zero sense. Shit, if you're just going to say more with no current use is dumb, why are you commenting on a thread about a 16 CORE 32 THREAD CPU!?!? This is for people that want more, and who knows, maybe there are people out there looking at these high-end chips going hmm, I want that bang for the buck, and they can use it. Hell, based on your logic I'd love to see your current setup. If you have a single port or slot open on your motherboard, I would like to ask you why you got something you may not use. What about your vehicle? Does it go over 60 MPH? Have you ever maxed out its hauling or towing capacity? If not, why do you have it?

And just because you upgrade your system every 3 years doesn't mean everyone who buys these will. Hell, we still have a lot of people on these boards running hex-core 56XX series chips, because they needed more than current setups offer.

I used to have an SR2 (actually, two of them). Obviously I'm in the market for more. If someone is looking for higher-end parts, I can't think of a single scenario where more for less is a bad thing.

PCIe lanes take up physical space on a MB because they require more pins/pads and more MB traces to run. They also make the CPU and MB more complex, and therefore more expensive. So it's a question of what value this feature provides to the target audience for its cost.

And let's not get into personal attacks. I'm not saying that having some extra expansion space isn't a good idea. I'm asking how much is too much. If you want to use the car example, does it make sense for a single guy to buy a 15-passenger van? You might want more than a 2-seater just in case you have some friends along, but a huge van would be silly. Same case here: if your typical "power user" will peak out at 20-30 lanes, why make EVERY CPU with 64 lanes? There's a reason to have some future expansion capability, but there's a limit to how much you're actually going to need or want. If you can think of a use case that needs 64 lanes in any kind of "home user" scenario, I'd be interested in the details.

I would actually love a motherboard with more USB ports. I find I have them all filled within weeks. Once you have a mouse, keyboard, gamepad, headphones, mic, external HDD, VR headset, Wacom tablet, 3D glasses, one free port to charge the vape, lol, etc., they are all gone (and I think having to buy a hub is kind of makeshift).

Well, good USB hubs are made for exactly that reason. The physical space limitations of the MB I/O port area and the size of PCI slots limit how many ports you can actually add to any system.

For somebody like the NSA or Google being able to run that many storage devices would be a gold mine of productivity and relatively low power usage.

If you're looking for large-scale storage like that, you're looking for one of two things: either blistering speed at huge cost, where you're talking multiple server-grade CPUs, or just massive quantity, in which case you're probably looking at large enterprise HDDs, not the high cost/GB of M.2 SSDs.
 
If you're looking for large-scale storage like that, you're looking for one of two things: either blistering speed at huge cost, where you're talking multiple server-grade CPUs, or just massive quantity, in which case you're probably looking at large enterprise HDDs, not the high cost/GB of M.2 SSDs.

It is my understanding all the major players are switching to solid state because of low power usage, low heat output, and low random access times when searching for data. And as the data is relatively static, read/write burnout rates are low.
 
I would think if the enthusiast GPU was going to even remotely challenge nVidia's GTX 1080 stack, AMD would be all over it.

I am getting the feeling that this Vega release is going to be a disappointment.

The Ryzen stuff is somewhat compelling, but Brent's latest gaming review throws a big bucket of cold water on that, too.

I'd like to think Vega was going to be exciting, but I'm just not feeling that.....
 
It is my understanding all the major players are switching to solid state because of low power usage, low heat output, and low random access times when searching for data. And as the data is relatively static, read/write burnout rates are low.

SSDs yes; M.2 SSDs, not yet. There's just no reason for the extra performance and higher costs. There's a lot of SSD caching right now, rather than switching to all-SSD. The benefits are all on the side of SSDs now, EXCEPT cost. So anyone that needs LOTS of storage is still going with HDDs. The only benefit M.2/NVMe provides over SATA/SAS is really performance, and that's just not needed in most large-storage cases.
 
Very little of Excel work is parallel.

This is simply not true. Each sheet can have its own thread and if you are intelligent you can parallelise.

It's very linear in nature because of the dependencies between cells.

You can work around this.

We have Excel sheets in excess of 100 MB of data on disk.

My brother works with data sets that can be over 100 GB on disk. Please take your toy sheets elsewhere.
 
PCIe lanes take up physical space on a MB because they require more pins/pads and more MB traces to run. They also make the CPU and MB more complex, and therefore more expensive. So it's a question of what value this feature provides to the target audience for its cost.

And let's not get into personal attacks. I'm not saying that having some extra expansion space isn't a good idea. I'm asking how much is too much. If you want to use the car example, does it make sense for a single guy to buy a 15-passenger van? You might want more than a 2-seater just in case you have some friends along, but a huge van would be silly. Same case here: if your typical "power user" will peak out at 20-30 lanes, why make EVERY CPU with 64 lanes? There's a reason to have some future expansion capability, but there's a limit to how much you're actually going to need or want. If you can think of a use case that needs 64 lanes in any kind of "home user" scenario, I'd be interested in the details.

First off, who said they wasted anything on it? Don't you think the company that has been in the dumps for years has already thought at least once in the past few years, "I wonder if cutting X will save us any money?" Perhaps they all have 64 because that was the cheapest option with the way the CPU is built. Who knows, but I feel it's a safer bet to say they've already thought about this and 64 made the most sense going forward.

Second, you already have a thread open with 2 people that give you examples of their setups with more than 40 lanes used, and your own example has that. I used to run quad SLI and in the future may get back to it once I don't have a baby in the house with all those expenses. Even if we run with dual cards at x16 we're looking at 32; I use a PhysX card, which requires another x4 minimum; and I have the house wired for 10 Gb, so as soon as I can get a good switch under $500 I'll get that (there's 8 more lanes). So we're at 44 lanes now. Add in at least one PCIe SSD (I'd prefer 2, but right now I just use one) and we're at 48. 49 with the sound card. 53 if we count the second PCIe SSD (one for OS, one for certain games only, all others on a spinner). So I easily hit 53 lanes, and that's still nowhere near an extreme bleeding-edge setup. And that's not counting whether the mobo will use any of those lanes. I used to have an x2 card for more SATA ports, but that may or may not be needed in my desktop (I have them in my Plex server, though, so I won't count that against you).

Right there I'm already at 83% capacity, not counting whether the system itself uses any lanes.

I'm already over most of Intel's offerings, maybe all; not sure, I stopped keeping up with the blue team lately. So, again, on a high-end part, how is it crazy to expect high-end setups? This isn't your grandma's check-her-email CPU; this is for people that add in a lot of shit. There are very easily made arguments for this, but you sound like the guy that complains there is a V10 in a Viper... of course there is! Sure, a random dude could get a 2-seater, but think of what a contractor could do with a 15-pax van vs a Prius. You keep talking about normal people; these aren't for normal people, any more than the Viper is for a daily commuter to Walmart.

These aren't typical CPUs, and they aren't meant for whoever you think is only going to use 20 lanes.
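For what it's worth, here's that 53-lane tally written out, with the per-device lane counts being my own rough assumptions from the list above:

# Hypothetical high-end home build from the post above (lane counts are
# assumptions, not measured): two x16 GPUs, an x4 PhysX card, an x8
# 10 GbE NIC, two x4 PCIe SSDs and an x1 sound card.
lanes = {
    "GPU 1": 16,
    "GPU 2": 16,
    "PhysX card": 4,
    "10 GbE NIC": 8,
    "PCIe SSD (OS)": 4,
    "Sound card": 1,
    "PCIe SSD (games)": 4,
}
total = sum(lanes.values())
print(total, f"{total / 64:.0%} of 64 lanes")  # 53, ~83% of 64 lanes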
 
Hell, if I had enough lanes I may even switch my scratch disk and my encode disks over to PCIe if the prices were right (I do a lot of encoding, hence the appeal of a 16-core monster for a good price), so there's an easy 4-8 more if I got both.
 
I might have missed it, but did they say that was a specific 2-card setup... what cards?

At their demo: a 16-core processor at what clock? 2 video cards at what clock, and with what cooling? And no frame rates or settings on that 4K gameplay.
 
First off, who said they wasted anything on it? Don't you think the company that has been in the dumps for years has already thought at least once in the past few years, "I wonder if cutting X will save us any money?" Perhaps they all have 64 because that was the cheapest option with the way the CPU is built. Who knows, but I feel it's a safer bet to say they've already thought about this and 64 made the most sense going forward.

Second, you already have a thread open with 2 people that give you examples of their setups with more than 40 lanes used, and your own example has that. I used to run quad SLI and in the future may get back to it once I don't have a baby in the house with all those expenses. Even if we run with dual cards at x16 we're looking at 32; I use a PhysX card, which requires another x4 minimum; and I have the house wired for 10 Gb, so as soon as I can get a good switch under $500 I'll get that (there's 8 more lanes). So we're at 44 lanes now. Add in at least one PCIe SSD (I'd prefer 2, but right now I just use one) and we're at 48. 49 with the sound card. 53 if we count the second PCIe SSD (one for OS, one for certain games only, all others on a spinner). So I easily hit 53 lanes, and that's still nowhere near an extreme bleeding-edge setup. And that's not counting whether the mobo will use any of those lanes. I used to have an x2 card for more SATA ports, but that may or may not be needed in my desktop (I have them in my Plex server, though, so I won't count that against you).

Right there I'm already at 83% capacity, not counting whether the system itself uses any lanes.

I'm already over most of Intel's offerings, maybe all; not sure, I stopped keeping up with the blue team lately. So, again, on a high-end part, how is it crazy to expect high-end setups? This isn't your grandma's check-her-email CPU; this is for people that add in a lot of shit. There are very easily made arguments for this, but you sound like the guy that complains there is a V10 in a Viper... of course there is! Sure, a random dude could get a 2-seater, but think of what a contractor could do with a 15-pax van vs a Prius. You keep talking about normal people; these aren't for normal people, any more than the Viper is for a daily commuter to Walmart.

These aren't typical CPUs, and they aren't meant for whoever you think is only going to use 20 lanes.

It may be a simpler design to have 64 lanes if they had fixed numbers per CCX, or however they have RTR arranged internally. It WILL still make the MB, socket and RTR package itself more expensive. There's just no avoiding the fact that more pins and traces cost more to design and more to manufacture.

There certainly are cases out there where people may use that many lanes. Similar to how there ARE cases of people running quad-SLI and needing "workstation" level motherboards. I've never said it doesn't happen, my point was that it's exceedingly rare in general, and even pretty rare in the world of HEDT CPUs.

I'm not complaining that these CPUs exist; I'm wondering about the logic of bringing out that many models, all fully loaded. To use your "V10 in a Viper" analogy, this isn't like Dodge made a Viper with a V10, it's like they made 10 different models of Viper, all slightly different, and they all have V10s. There's just not enough of a market for Vipers to justify that wide a range of products at the "top end" of the market. How do you distinguish your "top dog" product when they're all so similar in features?
 
If true....shots fired.

https://semiaccurate.com/forums/showpost.php?p=290705&postcount=594

Originally Posted by livebriand
Fott says $849 entry-level 16 Core TR - https://twitter.com/BitsAndChipsEng/...73386391891968

I would have hoped for $749 at the entry-level 16 core (a couple of 1700s tied together...), and $549 for the entry-level 12 core, with another $50 off those being best case.
At AMD HQ they are still talking about the prices; however, this should be the final price (AMD is lowering the 1800X price in order to sell an entry-level ThreadRipper 12C/24T at about $500). The main problems of this platform are the mainboards (very expensive) and the RAM (also very expensive... 4 channels).
 
It may be a simpler design to have 64 lanes if they had fixed numbers per CCX, or however they have RTR arranged internally. It WILL still make the MB, socket and RTR package itself more expensive. There's just no avoiding the fact that more pins and traces cost more to design and more to manufacture.

There certainly are cases out there where people may use that many lanes. Similar to how there ARE cases of people running quad-SLI and needing "workstation" level motherboards. I've never said it doesn't happen, my point was that it's exceedingly rare in general, and even pretty rare in the world of HEDT CPUs.

I'm not complaining that these CPUs exist; I'm wondering about the logic of bringing out that many models, all fully loaded. To use your "V10 in a Viper" analogy, this isn't like Dodge made a Viper with a V10, it's like they made 10 different models of Viper, all slightly different, and they all have V10s. There's just not enough of a market for Vipers to justify that wide a range of products at the "top end" of the market. How do you distinguish your "top dog" product when they're all so similar in features?

They distinguish them by cores and price. Well, that was easy.

And you did ask, "If you can think of a use case that needs 64 lanes in any kind of 'home user' scenario, I'd be interested," so I gave you an example where I can pull 53 lanes without it being a server and only using 2 GPUs. And as prices come down on PCIe storage I may add 8 more lanes to that for a scratch drive and an encoding drive, which would bring me up to 61 lanes. I'm a home user; I'm using it for gaming and encoding, home-user things. I'm not doing any deep learning or machine learning or ray tracing machines or anything like that, just building a high-end home system for games and encodes.

Staying with the car analogy theme: if you go to the Viper website there are 5 base models of the Viper for sale, so obviously people like having options, and if the market supports it there can very easily be multiple models at the top end.
 
Now if they released like 8 models at 16 cores and 64 lanes, and each one was only 100 MHz faster than the one before, then yeah, your stance would make sense.

But based off of leaks (we'll take it with a grain of salt, but the pattern seems pretty obvious) and the current release info we have in hand, there will be 2 models of every core count and that's it (maybe 3 or 4 of the 12 core only, but I'm seeing different things on different sites).

That seems like a pretty obvious way to "distinguish" them:

2 models for 16 core, one base and one slightly faster with XFR (+300 MHz stock)
2 models for 14 core, one base and one slightly faster with XFR (+300 MHz stock)
2-4 models for 12 core over an approx 1 GHz range, some with XFR (this will probably be the pricing sweet spot, which could be the reason for more models)
2 models for 10 core, one base and one slightly faster with XFR (+500 MHz stock)

Seems like a pretty easy and simple way to distinguish them, and there are more Viper models than 14- and 16-core models combined, so I imagine they will be just fine with the way they are running it.
 
They distinguish them by cores and price. Well, that was easy.

And you did ask, "If you can think of a use case that needs 64 lanes in any kind of 'home user' scenario, I'd be interested," so I gave you an example where I can pull 53 lanes without it being a server and only using 2 GPUs. And as prices come down on PCIe storage I may add 8 more lanes to that for a scratch drive and an encoding drive, which would bring me up to 61 lanes. I'm a home user; I'm using it for gaming and encoding, home-user things. I'm not doing any deep learning or machine learning or ray tracing machines or anything like that, just building a high-end home system for games and encodes.

Staying with the car analogy theme: if you go to the Viper website there are 5 base models of the Viper for sale, so obviously people like having options, and if the market supports it there can very easily be multiple models at the top end.

Now if they released like 8 models at 16 cores and 64 lanes, and each one was only 100 MHz faster than the one before, then yeah, your stance would make sense.

But based off of leaks (we'll take it with a grain of salt, but the pattern seems pretty obvious) and the current release info we have in hand, there will be 2 models of every core count and that's it (maybe 3 or 4 of the 12 core only, but I'm seeing different things on different sites).

That seems like a pretty obvious way to "distinguish" them:

2 models for 16 core, one base and one slightly faster with XFR (+300 MHz stock)
2 models for 14 core, one base and one slightly faster with XFR (+300 MHz stock)
2-4 models for 12 core over an approx 1 GHz range, some with XFR (this will probably be the pricing sweet spot, which could be the reason for more models)
2 models for 10 core, one base and one slightly faster with XFR (+500 MHz stock)

Seems like a pretty easy and simple way to distinguish them, and there are more Viper models than 14- and 16-core models combined, so I imagine they will be just fine with the way they are running it.

Your example case wouldn't really be "home use", and it's pretty far out on the scale of "reasonable". I'm sure anyone could come up with a list of devices that WOULD use 64 lanes, but it's not anything close to a typical "power user" system. So to start, you're talking about running SLI and then adding a PhysX card as well; completely pointless by all benchmarks, unless you're trying to do nothing but consume lanes. It's fully possible to load up a system with cards and devices and quickly run out of lanes. The question comes down to how many users will do this or need this. It's only a small percentage of users that buy HEDT machines to start with, and of that small group, only a very small percentage push over 44 lanes. So of the total number of RTRs AMD sells, how many will go over the 44-lane bar? How many would push closer to 64? I'd personally guess you'd be <5% of the users, which means it doesn't make sense from a cost perspective to make those features available on the entire product line.

The car analogy is getting abused here, but it's not a question of how many models there are, it's a question of how many of them have the top-of-the-line features. If you ONLY distinguish by frequency, then the overclockers are going to gut your high-end sales. They might have more leeway to distinguish the models by core count in the RTR/i9 cases, and maybe cache quantity. Intel's business move of pushing people to the higher-end models by restricting the PCIe lanes is a good move for their bottom line. Kinda sucks for the consumer, but it's a good move on their part. That's how they've made money in the HEDT market.
 
I see the different models as AMD selling defective 16C ThreadRipper processors as 14-, 12-, and 10-core processors. Even if they have to disable cores in good processors to meet demand for a 10-core ThreadRipper model, AMD hasn't lost a penny, as they all cost the same to make. AMD doesn't lose money off 1800X sales when a customer opts for the R7 1700, for example. The entire platform is going to be expensive because of the memory requirements and motherboard costs. Professionals will be happy with the cost, as AMD will most likely try to undercut Intel, and it seems to have plenty of power for them. Home users that can afford the platform are the type to buy Nvidia Titan XP cards and GTX 1080 Ti cards, so they will be happy to have the most PCIe lanes on a motherboard for semi-professional use. I see this as a terrific value for professionals, and a good investment for those that want it all.

Say a guy has a wife and 2 kids. He could buy a couple of mid range video cards for the adults in the family and a couple of lower tier video cards for the kids. Then split the system into VMs where he does work from home on a 6C/12T processor, his wife has a 4C/8T processor, and the kids have 3C/6T processors. Each of them would have a separate video card to use and they could all work and game together off the same machine without a slowdown in performance.
 
Not true unless you only count the consumer-tier products:
  • EPYC - server chip, release date June 20
  • Vega Frontier Edition - pro GPU, release date June 27
Both less than a month away. The consumer products that might conceivably be part of a gaming rig (I'm counting Threadripper for high-end mixed use) were further out and more nebulous... Threadripper in "summer", so months away; for RX Vega they only announced that they'll announce more in two months.

(OK, I just noticed it was the consumer parts, inc. APUs, originally mentioned)

No, I think Lisa said Vega will actually launch at Siggraph at the end of July. I am pretty confident that's what she said. So expect availability either the same day or a week or so after the announcement, since I consider launch to be the availability date.
 
Man, seeing that it's costing AMD 110 to 120 dollars to make every chip, that is an amazing strategy. No wonder they already cut prices on the Ryzen CPUs to make room for the upcoming Threadripper CPUs; looks like yields for Ryzen and Threadripper chips are really, really good.
 
No, I think Lisa said Vega will actually launch at Siggraph at the end of July. I am pretty confident that's what she said. So expect availability either the same day or a week or so after the announcement, since I consider launch to be the availability date.

You could consider launch to be anything you want; AMD will decide the "availability" date, whenever it is, relative to launch. Siggraph might just be a good venue to announce, but I wouldn't be surprised if products are available up to a month later.
 
No, I think Lisa said Vega will actually launch at Siggraph at the end of July. I am pretty confident that's what she said. So expect availability either the same day or a week or so after the announcement, since I consider launch to be the availability date.
That's consumer Vega. As I said, the consumer products do have a couple months' wait before release, but the pro & server products start within the month -> "Vega Frontier Edition - pro GPU, release date June 27".
 