Radeon Technologies Group Q&A On Reddit

HardOCP News

[H] News
For those of you interested, there is a Q&A on Reddit with the Radeon Technologies Group. The layout is terrible and you have to scroll all over the place looking for answers to questions but, if you have the time, there is some interesting stuff in there if you dig for it.

The Q&A is open to all topics dealing with anything under the Radeon Technologies Group (the Radeon division of AMD). Some big secrets could be unveiled this Thursday, so bring your best questions! Likely topics of discussion include Vulkan, FreeSync, GPUOpen, Polaris, Fury X2, VR, DirectX 12, and anything else you're curious about. Don't ask about Zen. It's still a super secret.
 
Not a lot of info. Entirely PR Fluff of stuff we already know. Nice to have it all in one place, though.
 
Actually, there is quite a bit of information/confirmation. I guess having green glasses makes one blind to REDdit's color scheme. ;)

Just a few interesting posts:


1) Are there any plans to incorporate some of RadeonPro's features in future Crimson drivers, like texture LOD controls, dynamic vsync, mip map quality, etc.?

2) Will there be more antialiasing modes or options incorporated in future drivers?

3) Could Radeon Settings have custom fan profiles and core voltage controls implemented in future drivers?

4) How satisfied are you with the current state of the Crimson drivers?

5) Will there be more driver-side tessellation improvements for the current 300 series cards?

6) Could SSAO or HBAO methods be implemented in future drivers for older games that do not possess any of these occlusion methods?

7) And finally, will there be any frame latency improvements?
Radeon Technologies Group Q&A is happening here on March 3rd 10AM-5PM central time. • /r/Amd

Vulkan and other low-overhead APIs' effects on performance, FreeSync, confirmation of 14nm, and TressFX's minimal performance impact due to its openness.

Confirmation of Polaris hardware encode/decode (previously from an "internal AMD slide").

  1. Can you say if the previously shown Fury X2 boards are final? If so, are they?
  2. What direction is dual graphics going in? Between DDR4 and improvements to GCN the memory bottleneck is certainly being eased - can we expect an XDMA engine in future APUs?

  3. Is AMD actively working with any game developers on taking advantage of HSA?

  4. Bit similar to the above, but will GPUOpen include functions that leverage the capabilities of processor graphics cores, either through HSA or OpenCL? If so, will they support Intel's "APUs" at all, or will you be accepting patches from Intel that would add such support? Is this made more awkward to address by the fact that more expensive "enthusiast-grade" CPUs are less likely to include graphics cores?

  5. Any sign of Intel supporting freesync in the future? Would you even know if they were going to?

  6. Is it true that Kaveri has a GDDR5 controller, but no products using it were released because of poor prospects for market positioning or the total lack of hardware ecosystem support?

  7. Polaris is going to be in the next macbook, isn't it?

  8. ISN'T IT?

  9. More serious question in that I actually think you might be able to answer unlike the above 2 or 3, how easy is it for monitor vendors to initially adopt freesync, and to add more freesync products once they've done their first? Do you see there being a point in the future where Freesync is a universal or near-universal feature of new monitors?

  10. Where do you think VR will be in 5 years time, and what do you think the impact on 'conventional' gaming will be?

  11. There's a lot of buzz about "explicit multiadapter" in DirectX 12. Do you think DX12 and Vulkan are going to lead to more games supporting some sort of multiGPU technology (outside of one-chip-per-eye VR setups)?

  12. Is there a trend towards multigpu implementations that focus more on user experience than pure fps as in Civ:BE, or was that a bit of a one-off?

  13. What are you personally most looking forward to - VR, low-level APIs becoming commonplace, or the inevitable big jump in top-end "conventional" performance that will come sooner or later with the jump to 16/14nm?
Update on DX12 and MS UWP

Confirmation of DP 1.3

Can we put the DX11 question to rest: are we going to see DCLs added or not?

Can anything be said in more detail about how RTG views its current drivers and what it's striving for?

How soon can Linux users expect some love, in the form of major gains in the Linux driver?

Any chance of integrating features or advanced settings from programs like RadeonPro and RadeonMod, and ditching Raptr outright?

Are we going to see any rebrands this year or will everything be a fresh baseline with 14nm on the GPU side?

With the benchmarks we're seeing that take advantage of async support, will this feature require additional work from devs, or is it easily implemented enough to become mainstream?

Any tricks up AMD's sleeve that we might see in Vulkan that may or may not be an easy addition to DX12?

Is there a specific reason the drivers still have downclocking issues when a small program can be used to stop them?

Are there any plans to increase resources (including software engineers) allocated to Radeon Software including Drivers, GPUOpen materials and other products?

AMD Dockport, anything we can expect to see from this?
 
Actually it seems pretty standard, just like any other tech AMA. And well, it is understandable: even if the one answering works for AMD, he is also under their internal NDAs and thus can't answer about specific Polaris microarchitecture details until they're green-lit.

EDIT: thanks FrameBuffer for the lengthy post :)

Also interesting: Polaris will be all 14nm GloFo.
 
Holy crap that link is long... so far I like these comments:

Let’s say you have a bunch of command lists on each CPU core in DX11. You have no idea when each of these command lists will be submitted to the GPU (residency not yet known). But you need to patch each of these lists with GPU addresses before submitting them to the graphics card. So the one single CPU core in DX11 that’s performing all of your immediate work with the GPU must stop what it’s doing and spend time crawling through the DCLs on the other cores. It’s a huge hit to performance after more than a few minutes of runtime, though DCLs are very lovely at arbitrarily boosting benchmark scores on tests that run for ~30 seconds.

The best way to do DX11 is from our GCN Performance tip #31: A dedicated thread solely responsible for making D3D calls is usually the best way to drive the API. Notes: The best way to drive a high number of draw calls in DirectX11 is to dedicate a thread to graphics API calls. This thread’s sole responsibility should be to make DirectX calls; any other types of work should be moved onto other threads (including processing memory buffer contents). This graphics “producer thread” approach allows the feeding of the driver’s “consumer thread” as fast as possible, enabling a high number of API calls to be processed.

Mainly because that has been the bulk of the discussion for myself and a few members here for over 2 months.:p

We will add DirectFlip support shortly

This answers the 60fps lock in AotS and possibly the MS store 60fps locks.

I want to be clear that there is no graphics architecture on the market today that is 100% compliant with everything DX12 or Vulkan have to offer. For example: we support Async Compute, NVIDIA does not. NVIDIA supports conservative raster, we do not. The most important thing you can do as a gamer is to own a piece of hardware that is compatible with the vast majority of the core specification, which you do. That’s where all the performance and image quality comes from, and you will be able to benefit.

Guess now I need to know more about conservative raster. Guess it isn't being used yet since no one has really been mentioning it.

8-16x tessellation factor is a practical value for detail vs. speed, and this is what our hardware and software is designed around. Higher tessellation factors produce triangles smaller than a pixel, and you’re turfing performance for no appreciable gain in visual fidelity.

Need to keep this one close to the hip for those GameWorks battles.

One of the points of low-overhead APIs is to move the driver and the run time out of the way, and minimize their impact to the overall rendering latency from start to finish. Vulkan and DX12 both behave like this. And I want to be clear that they are designed like this because all graphics drivers need to get out of the way to achieve peak performance from an app.

Probably the best short explanation of the difference between DX11 and DX12.

NVIDIA had success with TressFX because we designed the effect to run well on any GPU. It’s really that simple. They were successful because we let them be. We believe that’s how it should be done for gamers: improve performance for yourself, don’t cripple performance for the other guy. The lesson we learned is that actual customers see value in that approach.

Actually the question for that answer was weird, to me at least:

Nvidia seems to have seen some success with their tressfx tessellation. That success wouldn’t have worked if nvidia hadn’t invested a lot in the hardware, but it also required a big investment in middleware, and on top of those two things, they needed games and game engines to use them. Can any lessons be learned from that?

Did they mean GameWorks there instead of tressFX? Would make more sense to me.

This is one of my favorite questions on the thread. In fact, interposers are a great way to advance Moore’s law. High-performance silicon interposers permit the integration of different process nodes, different process optimizations, different materials (optics vs. metals), or even very different IC types (logic vs. storage) all on a common fabric that transports data at the speed of a single integrated chip. As we (“the industry”) continue to collapse more and more performance and functionality into a common chip, like we did with Fiji and the GPU+RAM, the interposer is a great option to improve socket density.

This looks important. lol I will figure out why later. :cautious:
 
JustReason, it is important because not everything can be scaled down to a new node in time, nor may it be desirable to do so if the gains are too small. Hence, with a proper interposer you can combine multiple chips at different nodes on a very fast package, and that is what they are doing with Fiji: they are putting the HBM chips and the GPU on the same interposer.
In the end, with more advanced interposers they could fine-tune each component to create their best version of price/performance. The basic example going forward was to move away from single big chips toward multiple medium or small ones on the GPU side, to increase total yield while keeping performance high.
 
I think that is some of what was said just after... something about multiple chips equaling a bigger chip, to forgo the bigger chip's cost (wafer cost and failure rates). Thanks for the info.
 
I was really, and expectedly, underwhelmed with this AMA. They had to have known everyone wanted Polaris info. I get why they couldn't, but I was really hoping for something. That said, the AMD rep seemed nice.
 
AMD PR people that I have met have been nice, and well informed within the bounds of what they are allowed to talk about. When I worked for CompUSA, AMD was supposed to send a team out to demonstrate Eyefinity shortly after it came out, but the only one that showed up was the rep. None of the techs that were supposed to set up the display appeared. Fortunately I had built several Eyefinity PCs for customers, and had set them up and shown them how it worked, so I was able to get it all hooked up for him. He handled everything else with the demo just fine and really knew how to get people's interest. Was cool to see.
 
AMD_Robert (Technical Marketing) wrote:

ROCKET LEAGUE EVERY DAY. And Overwatch, and Fractured Space.

My specs:

Radeon R9 Fury X

Core i7-6700K

16GB DDR4-3000

Samsung 950 256GB

Acer XR341CK (glorious uw master race)

LOL. I like the lack of shame.
 
I was astonished that he shared AMD's goal for VR of 16K per eye at 240Hz, to reflect reality. WOW! That is astronomically more graphically complex than any GPU or combo of GPUs can achieve today. 16K is 16x the pixels of 4K, two eyes doubles that to 32x, and 4x the refresh rate makes 128x more pixels per second than 4K at 60Hz. That type of power is 10+ years off, is my best guess.

TrueAudio is a great technology, so why is it not used more? The real answer, I think, is that your competitor Nvidia has about 80% of the discrete market, which does not support it, so for developers it would only serve the smaller hardware base. That is another reason Nvidia can push GameWorks: a developer using it reaches the roughly 80% of the hardware market that can potentially run it. If AMD did something like GameWorks, developers would probably not use it at all unless it worked flawlessly on Nvidia hardware, made their lives easier, and came with confidence that they would be supported when issues came up; so for AMD, why bother (AMD can only look in the mirror for this). It looks like the best course for AMD is the open-standard stance: build up a larger community that can move forward sharing code.

I expect the interposer of the future to support multiple GPUs using HSA for the memory, so both GPUs have access to all of it. GPU + CPU for high-end, low-profile configurations? Maybe, who knows. It is a way to make a very dense, multi-layer mini circuit board for the components that need the most bandwidth, keeping the motherboard much less complex.

VR is such a whole new area that can expand. I can see developers using medical equipment with a medical staff to evaluate the effects of their code: how to keep folks from throwing up, falling over, or having seizures, and how to prevent long-term damage. Then add in augmented reality on top of that.

I see Polaris covering mobile/low-end and maybe mid-range this June. AMD having something viable for the mobile market is big; that is a segment they lost to Nvidia with Maxwell, though it actually started with Kepler. Quantity, quantity, and more quantity: AMD took a beating there big time, and it looks like they will come out swinging. The big boys will come out later.
 
LOL. I like the lack of shame.

That's nothing: Project Quantum featured a 4790K and AMD wasn't afraid of showing it off. When the inevitable question hit, they simply remarked "it was the best choice for what we wanted to accomplish," so I think the same could be said of Hallock's rig.

Hell, I'm willing to bet the machines doing the heavy lifting over at AMD engineering probably all use Intel. :p
 
Intel will fade in those AMD engineering labs and Zen will take over :D.
 