AMD Press Conference at Computex

Why, because you said so? You want a card with no memory? HBM2 just went into mass production, it seems, and maybe they don't want to launch 20k cards like the rumors said and then have people all pissed off over low stock. Instead they decided to make a low-quantity card available first, then ramp up production on the gaming RX Vega? I guess that makes too much sense. AMD can't really help Hynix fucking up the last two quarters, delaying and delaying HBM2. And finally it looks like it's ramping up now. No HBM2 in large quantities, no gaming Vega, sorry. That's the only reason you are seeing a pro card first. And Fury was released with the same issue: low stock. Maybe they are trying to avoid that?

As you mentioned, this same shit happened with the Fury cards. They waited way too long to release the card because of the same excuse they are using here. They waited and waited, and in the end, sure, it was comparable to a 980 Ti stock for stock, but the 980 Ti had been out for a YEAR, so the Fury was no longer relevant.

They keep trying to push the next big thing, and they are wasting their efforts. We have seen it time and again. They want to be able to tell us that their product is way better than Nvidia's and Intel's because of how new and cool the tech is! ... Buuuut then they release things late and in a fucked-up jumble.
 
Is that for real? Nobody builds a fucking 16-core rig to game at 1080p. How is gaming so damn awful on Ryzen? All I have seen is the platform maturing and a lot of titles getting patched with significant boosts at 1080p. By that logic, Intel should never launch anything higher than 4 cores, because 1080p gaming is all we do and we're dying for those 6 fps when we're at 120 already.

I was responding to someone who mentioned 2x GPU and Threadripper. It doesn't make sense based on extrapolating from what we have now. It's a niche product for productivity, not gaming. VR is 90 Hz and may go higher. High-refresh screens are becoming more available at 21:9 and 4K. AMD is behind in IPC, and I have to imagine a 16-core chip is going to be way slower per core than the current 8-core due to thermal throttling.

The difference is Intel can hold higher GHz at higher core counts. My 5960X can hit 4.8 GHz under water.

Why would someone spend thousands on 2x 1080 Ti, monitor, RAM, mobo, SSD, then gimp the whole thing by getting Threadripper?
 
A lot of people use NVENC... hell, people use VCE and Quick Sync even though the quality is trashy at times. Not everyone is going to stream to a second computer to do the encoding or shove an Elgato in there to do it; plenty of people look for ASIC solutions like that because they are consistent, they don't require a large change to their system, and they look about as good as x264 veryfast without the CPU hit, so large scene changes don't hit the computer hard.

Those GPU-encoded streams are the ones I skip on Twitch because they always look like trash. Now, NVENC looks fine on YouTube for the most part, because you can turn the bitrate up to ridiculous levels when recording to a hard drive. You can't stream to Twitch or YouTube at those ridiculous bitrates, though. So when you turn NVENC down to "stream compatible" bitrates, it looks like trash, especially when there is a ton of grass in the scene or lots of movement. That's why CPU encoding rules the roost until AMD or Nvidia figure out how to increase the quality of low-bitrate encoding on the GPU.

Hope that helps you to understand why content creators like more CPU cores over raw CPU speed. :)
 
I was responding to someone who mentioned 2x GPU and Threadripper. It doesn't make sense based on extrapolating from what we have now. It's a niche product for productivity, not gaming. VR is 90 Hz and may go higher. High-refresh screens are becoming more available at 21:9 and 4K. AMD is behind in IPC, and I have to imagine a 16-core chip is going to be way slower per core than the current 8-core due to thermal throttling.

The difference is Intel can hold higher GHz at higher core counts. My 5960X can hit 4.8 GHz under water.

Why would someone spend thousands on 2x 1080 Ti, monitor, RAM, mobo, SSD, then gimp the whole thing by getting Threadripper?

Not getting into the argument, but the new 8-core Intel chips only have 28 PCIe lanes. Your 5960X was replaced by a $999 10-core chip with 44 lanes, I believe. Intel seriously nerfed their future lineup. :( I guess people could buy used chips in the future, though.

 
Those GPU-encoded streams are the ones I skip on Twitch because they always look like trash. Now, NVENC looks fine on YouTube for the most part, because you can turn the bitrate up to ridiculous levels when recording to a hard drive. You can't stream to Twitch or YouTube at those ridiculous bitrates, though. So when you turn NVENC down to "stream compatible" bitrates, it looks like trash, especially when there is a ton of grass in the scene or lots of movement. That's why CPU encoding rules the roost until AMD or Nvidia figure out how to increase the quality of low-bitrate encoding on the GPU.

Hope that helps you to understand why content creators like more CPU cores over raw CPU speed. :)
At 3500-6000 kbps, which you can do on Twitch and YouTube nowadays, it looks fine, especially for the lack of fps hit. Just be careful with the settings: 720p-1080p, nothing more, at 20-30 fps, nothing more; it's fine, just mess with keyframes and the preset. Scenes like a car moving through grass wipe out any setup using CBR, outside of a dedicated second higher-end PC with enough buffer. Beyond that, software like OBS doesn't use NVENC as well as it could. Kepler doesn't have the same capabilities as Pascal, even when you're talking just about the AVC codec, and they don't produce the same quality. It's not a night-and-day difference, but it's quite a bit of difference. Kind of funny, anyway, given that second-gen Maxwell and Pascal cards have some neat stuff in them, but Nvidia's own software, ShadowPlay, flat-out doesn't use it. The only hardware encoder I wouldn't use is VCE; you can notice temporal degradation, something you should never see, even on their modern GPUs.

More CPU isn't the long-term solution anyway; what will more CPU mean when games catch up and eat that headroom themselves? For streaming, ASIC solutions are the poor man's solution; Nvidia, AMD, and Intel just need to catch up to the expectation of streaming at the lower bitrates. Although I feel they have probably already moved on from AVC and are just waiting for streaming services to use HEVC, however those royalties work out.
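For anyone who wants to try it, here's a minimal sketch of "stream compatible" NVENC settings driven from Python. It assumes an ffmpeg build compiled with NVENC support; the input file and stream key are hypothetical placeholders, and exact preset names vary between ffmpeg versions, so treat it as a starting point rather than gospel:

```python
import subprocess

# Hypothetical input file and stream key -- substitute your own.
INPUT = "gameplay.mkv"
RTMP_URL = "rtmp://live.twitch.tv/app/YOUR_STREAM_KEY"

cmd = [
    "ffmpeg", "-re", "-i", INPUT,          # read input at its native frame rate
    "-vf", "scale=1920:1080", "-r", "30",  # 1080p30, per the advice above
    "-c:v", "h264_nvenc",                  # NVENC hardware H.264 encoder
    "-rc", "cbr",                          # constant bitrate, as stream services expect
    "-b:v", "6000k",                       # top of the 3500-6000 kbps range
    "-maxrate", "6000k",
    "-bufsize", "12000k",
    "-g", "60",                            # keyframe every 2 s at 30 fps
    "-c:a", "aac", "-b:a", "160k",         # audio
    "-f", "flv", RTMP_URL,
]
subprocess.run(cmd, check=True)
```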
 
I'm a little surprised nobody here has tried to extrapolate or postulate on the Handbrake demo: Threadripper vs. Ryzen.

Same.

The GPU demo at the end was interesting, if only for the lack of information :/ It was quick, and there was not much clapping going on, lol. I'm still very curious, though.

Threadripper looks to be rather neat, and crap that thing is big!
 
Anyone have a summary?
a. APU?
b. Laptops sub-$1k with a Vega APU? That would be nice.
c. Vega highlights?
 
Wow, Computex press conference, standing room only: 0 products!

As an AMD fanboy, I'm very sad.
 
I am still sticking with my gut that Vega will underachieve, and I don't really blame the current AMD for that.


Understand that Polaris and Vega were the last vestiges of the Islands GPUs - technically, they are based on that architecture and would have finished it up had AMD not changed gears and internal code names.

As such, all Raja could do was improve on the existing architecture to various degrees, and then try to sell off the stock to stay solvent until they could get their own card in the pipe.

Polaris was well into development when Raja took over, and Vega had likely already begun its initial engineering passes since, again, everything was based on the Islands architecture. For Polaris, they were able to make a minor Frankenstein modification, and because Vega was still early, I am sure they will have even more new tech in it, but it will still have the limited Islands tech at its core. As such, I think performance will be hampered by this, as the Islands architecture is not a competitive design against what Nvidia has, imho.

Navi will be the first card 100% designed by Raja's team, top to bottom. We won't really know just how good AMD's Radeon division can be until Navi, because frankly, I think they are just taking lemons and making lemonade right now. I wouldn't be surprised if Vega ends up saturating the mid-market the way the Polaris cards captured the low end, and the AMD "bang for your buck" trend continued.

As far as any faith in Navi goes, I have zero expectations because, frankly, I have literally nothing to go on. Polaris, and likely Vega, will be Frankenstein cards, so I have no idea how I'd separate the new tech from the old Islands core in terms of card performance. I'm sure the engineers know, because I wouldn't be surprised if they are using Vega as a guinea pig for some of that new tech, but we on our end won't know just how good it is until we have a new architecture at the core of it, and that won't happen until Navi.

Just my take. I hope I am wrong about Vega, but knowing how long it takes to bring a video card from conception through R&D to production, I think the timing works out more in favor of what I just laid out. Hopefully Vega can indeed bridge the gap, but if there is going to be an Nvidia killer, it will be Navi - and if Navi isn't, it may end up being a Radeon killer, as RTG will have finally run out of excuses if their own designed card flops.
 
Wow, Computex press conference, standing room only: 0 products!

As an AMD fanboy, I'm very sad.
Did you know they released 8-core CPUs and 4-core APUs with Vega to OEMs, for release in a few months? I had no idea, and I am hyped about that; anything to get rid of this G-Sync-required external-monitor POS 1070.
 
Nice to know that all Threadripper CPUs will have 64 lanes of PCIe.

Still, RX Vega remains elusive. I think that is what most people are waiting for right now. Certainly doesn't help that the RX 480/580 market is destroyed due to miners grabbing all the cards.
 
Well, looks like I'll be partying like it's 1998 soon, lol. Jumping back to the red team for the first time since the Athlon XP days.
 
While 64 lanes would be nice, I'm not sure what the hell you'd use them for.

16 + 16 for an SLI setup (2-way SLI is still OK, but anything higher is dead)

4 + 4 + 4 for a trio of M.2 SSDs (a bit of overkill really, I'd still use one large M.2 for OS/games, and a SATA solution for bulk storage)

That's 44 lanes so far...what the hell am I gonna do with 20 extra lanes? Even adding in a TB3 card or a 10Gb NIC will only use a couple lanes.
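Spelled out, that hypothetical budget is just a quick tally (a toy sketch of the build above):

```python
# Toy tally of the hypothetical 64-lane budget described above.
budget = {
    "GPU 1 (SLI, x16)": 16,
    "GPU 2 (SLI, x16)": 16,
    "three M.2 SSDs (x4 each)": 12,
}
used = sum(budget.values())
print(f"used:  {used} lanes")       # 44
print(f"spare: {64 - used} lanes")  # 20
```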
 
Nice to know that all Threadripper CPUs will have 64 lanes of PCIe.

Still, RX Vega remains elusive. I think that is what most people are waiting for right now. Certainly doesn't help that the RX 480/580 market is destroyed due to miners grabbing all the cards.

While it sucks for us, AMD should be thanking the miners because those dudes are helping AMD liquidate all their stock.
 
True, but the funny thing is you understood what I meant. Sorry, I don't proofread on my phone. You would be surprised how good of a writer I actually am, but I usually don't care when I am not being graded, or when I am being judged by someone behind a keyboard on the internet.

I have no qualms with your writing; it just made my day to read that amid the droves of opinions about how AMD should be running their company.
 
While 64 lanes would be nice, I'm not sure what the hell you'd use them for.

16 + 16 for an SLI setup (2-way SLI is still OK, but anything higher is dead)

4 + 4 + 4 for a trio of M.2 SSDs (a bit of overkill really, I'd still use one large M.2 for OS/games, and a SATA solution for bulk storage)

That's 44 lanes so far...what the hell am I gonna do with 20 extra lanes? Even adding in a TB3 card or a 10Gb NIC will only use a couple lanes.

With deep learning you can train 4 models on 4 cards in parallel - there's your 64 lanes.

That will 4x your productivity as a researcher; worth every penny.
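As a rough illustration, here's a minimal sketch assuming PyTorch and four CUDA GPUs; the model and data are toy stand-ins, but the pattern (one independent training process per device) is the point:

```python
import torch
import torch.nn.functional as F
import torch.multiprocessing as mp

def train_one(gpu_id: int) -> None:
    # Each worker trains its own independent toy model on its own GPU,
    # so four experiments run side by side instead of back to back.
    device = torch.device(f"cuda:{gpu_id}")
    model = torch.nn.Linear(512, 10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(1000):
        x = torch.randn(256, 512, device=device)         # stand-in batch
        y = torch.randint(0, 10, (256,), device=device)  # stand-in labels
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    # mp.spawn passes the process index (0..nprocs-1) as the first argument.
    mp.spawn(train_one, nprocs=min(4, torch.cuda.device_count()))
```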
 
With deep learning you can train 4 models on 4 cards in parallel - there's your 64 lanes.

That will 4x your productivity as a researcher; worth every penny.

I get that there are use-cases out there. I'm wondering what the justification would be outside the lab, research center, or server farm.
 
I get that there are use-cases out there. I'm wondering what the justification would be outside the lab, research center, or server farm.

Movie encoders/transcoders
Ray tracers
CAD/CAM
Engineering applications
Compilers will use all those threads too.
 
While 64 lanes would be nice, I'm not sure what the hell you'd use them for.

16 + 16 for an SLI setup (2-way SLI is still OK, but anything higher is dead)

4 + 4 + 4 for a trio of M.2 SSDs (a bit of overkill really, I'd still use one large M.2 for OS/games, and a SATA solution for bulk storage)

That's 44 lanes so far...what the hell am I gonna do with 20 extra lanes? Even adding in a TB3 card or a 10Gb NIC will only use a couple lanes.

The big thing is that you can have SLI, CrossFire, or Tri-Fire without any card dropping to x8 lanes. Extra SATA and USB ports over a CPU's base specification tend to cannibalize PCIe lanes, so that's potentially another 2-4. An add-in sound card takes one more. On an Intel platform you'd already be at either x8 or even x4 for at least one video card, and you'd likely have multiple SATA and/or USB ports disabled just to run a second M.2 drive.

Then there are other add-in cards for the prosumer market, such as high-density, high-bandwidth PCIe storage solutions, while still having a rendering machine. Just think of a render-farm unit with 7 single-slot video cards, each using x8 lanes (56 lanes right there), while holding data sets in RAM or a RAID 0 M.2 mini array.
 
The big thing is that you can have SLI, CrossFire, or Tri-Fire without any card dropping to x8 lanes. Extra SATA and USB ports over a CPU's base specification tend to cannibalize PCIe lanes, so that's potentially another 2-4. An add-in sound card takes one more. On an Intel platform you'd already be at either x8 or even x4 for at least one video card, and you'd likely have multiple SATA and/or USB ports disabled just to run a second M.2 drive.

Then there are other add-in cards for the prosumer market, such as high-density, high-bandwidth PCIe storage solutions, while still having a rendering machine. Just think of a render-farm unit with 7 single-slot video cards, each using x8 lanes (56 lanes right there), while holding data sets in RAM or a RAID 0 M.2 mini array.

Agreed, but it's been shown that SLI and CF don't really benefit from more than 8 lanes of PCIe 3.0. That might change in the future, but not any time soon. You can get a lot of USB and SATA ports on the old PCIe 2.0 lanes, or all on a single 3.0 lane if needed. Sound cards are the same; there's a very limited market for them these days. Motherboard audio is pretty decent, and people who want more usually go with external DACs.

That's really my point: individually, there are plenty of cards or reasons to need a PCIe slot, but there's not much reason to need ALL those lanes in a single machine. You're talking about a machine with 3 GPUs + a video capture card + 3 M.2 drives + 20 USB 3.0 and SATA ports + internal sound + a 10 Gb NIC. The market for that level of machine is extremely small, yet we have a whole line of CPUs that support it.
 
Agreed, but it's been shown that SLI and CF don't really benefit from more than 8 lanes of PCIe 3.0. That might change in the future, but not any time soon. You can get a lot of USB and SATA ports on the old PCIe 2.0 lanes, or all on a single 3.0 lane if needed. Sound cards are the same; there's a very limited market for them these days. Motherboard audio is pretty decent, and people who want more usually go with external DACs.

That's really my point: individually, there are plenty of cards or reasons to need a PCIe slot, but there's not much reason to need ALL those lanes in a single machine. You're talking about a machine with 3 GPUs + a video capture card + 3 M.2 drives + 20 USB 3.0 and SATA ports + internal sound + a 10 Gb NIC. The market for that level of machine is extremely small, yet we have a whole line of CPUs that support it.

Lane count matters at very high resolutions and with lots of eye candy; think 12+ megapixels rather than 2+ megapixels. Kaby Lake would be down to having one GPU at x4, which is noticeable even on a 4K screen. As for SATA, one saturated SATA III port is most of a PCIe 3.0 lane.
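The back-of-the-envelope math behind that last claim, using the nominal spec rates:

```python
# SATA III: 6 Gb/s line rate with 8b/10b encoding -> 4.8 Gb/s of payload.
sata3_payload_gbps = 6.0 * 8 / 10
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~7.88 Gb/s of payload.
pcie3_lane_payload_gbps = 8.0 * 128 / 130

print(f"SATA III:      {sata3_payload_gbps / 8:.2f} GB/s")       # ~0.60 GB/s
print(f"PCIe 3.0 x1:   {pcie3_lane_payload_gbps / 8:.2f} GB/s")  # ~0.98 GB/s
print(f"fraction used: {sata3_payload_gbps / pcie3_lane_payload_gbps:.0%}")  # ~61%
```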
 
Comfortably above 60 lol.


Ryan Smith confirmed the demo was running at 30 Hz, btw.

Lol, he is making typos. I think he meant above 60 in both spots. He said about 60, and then comfortably above 60 and more than any single solution. It may just be a language barrier for him; I mean no insult there. He can't say about 60 and above 60 in the same sentence, lol. I am pretty sure he meant above 60 for both.
 
Lol, he is making typos. I think he meant above 60 in both spots. He said about 60, and then comfortably above 60 and more than any single solution. It may just be a language barrier for him; I mean no insult there. He can't say about 60 and above 60 in the same sentence, lol. I am pretty sure he meant above 60 for both.

Considering the performance of a single 1080 in that game, dual Vega running anything below 100 fps is a travesty.
 
Lane count matters at very high resolutions and with lots of eye candy; think 12+ megapixels rather than 2+ megapixels. Kaby Lake would be down to having one GPU at x4, which is noticeable even on a 4K screen. As for SATA, one saturated SATA III port is most of a PCIe 3.0 lane.

I'd like to see the testing you have to show that. All the stuff I've seen shows at best a 1-2% difference for a top-end card (Titan Xp / 1080 Ti) at x16 vs. x8, even at 4K with everything on. I'm not saying I wouldn't LIKE to have x16, but I question how much it really matters. Similarly with the SATA question: what application will you be running on a regular basis that will saturate all your SATA ports at once?
 
Agreed, but it's been shown that SLI and CF don't really benefit from more than 8 lanes of PCIe 3.0. That might change in the future, but not any time soon. You can get a lot of USB and SATA ports on the old PCIe 2.0 lanes, or all on a single 3.0 lane if needed. Sound cards are the same; there's a very limited market for them these days. Motherboard audio is pretty decent, and people who want more usually go with external DACs.

That's really my point: individually, there are plenty of cards or reasons to need a PCIe slot, but there's not much reason to need ALL those lanes in a single machine. You're talking about a machine with 3 GPUs + a video capture card + 3 M.2 drives + 20 USB 3.0 and SATA ports + internal sound + a 10 Gb NIC. The market for that level of machine is extremely small, yet we have a whole line of CPUs that support it.

I'd rather have more than less. No clue what the future holds or how those lanes could be repurposed, but I'll take more for less money over Intel's less-for-more-money plan.
 
Considering the performance of a single 1080 in that game, dual Vega running anything below 100 fps is a travesty.

Well, Fury X is running at 45 fps, so dual Vega can't really be slower than dual Fury X. Lol.
 
Comfortably above 60 lol.


Ryan Smith confirmed the demo was running at 30 Hz, btw.

Jebus. The only thing I can hope for is that AMD is doing a fake-out to keep Nvidia in the dark. But it doesn't look good.
 