AMD Demonstrates "FreeSync", Free G-Sync Alternative at CES 2014

I am speculating too, based on my technical background. Tell me how it works.
What is your background?

It is based on some of their old patents but has been improved/updated thanks to newer ideas/implementations and VESA standards.

Thanks to lanek @ B3d for finding an OLD patent.
 

Just reading the abstract: this differs from my description only in who predicts the frame rate. I said software; this patent essentially says hardware. You still need to know the frame rate ahead of time, which you can do with video but can't do with games. You still have some guessing to do. You can do very smart guessing, for sure, but that doesn't mean you can predict the future.
 

By the way, I studied CS in college and am working for a tech company, just to show you I am not a total idiot, so we can discuss instead of getting into a heated argument :)

My point is this: without some software running on the monitor to wait for a draw command from the GPU, there is no way the monitor can hold VBI until the next frame. There is certainly no standard for that, and that's why G-Sync has to do it in hardware.

How can AMD tell the monitor to hold VBI until the next draw? How can the GPU know, while the previous frame is being drawn, how long the next frame will take to render? Usually that takes some extrapolation, but extrapolation means error.

Plus you still need VSYNC on, so you still have lag.
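(To make the extrapolation point concrete, here is a rough, purely illustrative sketch in Python of the kind of predictor a driver might use; the function name and the weighting are invented for this example, not anything AMD has described.)

# Hypothetical sketch: guess the next frame's render time from recent history.
# A smoothed average tracks steady loads well, but a sudden spike is only seen
# AFTER it happens, so a refresh scheduled from the guess will be wrong.

def predict_next_frame_ms(history_ms, alpha=0.2):
    """Exponentially weighted guess of the next frame time (illustrative only)."""
    estimate = history_ms[0]
    for t in history_ms[1:]:
        estimate = alpha * t + (1 - alpha) * estimate
    return estimate

frame_times = [16.7, 16.9, 16.5, 17.0]      # steady ~60 fps so far
print(predict_next_frame_ms(frame_times))    # ~16.8 ms -> VBI scheduled for that
frame_times.append(33.0)                     # but the next frame actually spikes
# The refresh was already scheduled around ~16.8 ms, so the spike means either a
# repeated frame or a forced wait -- exactly the extrapolation error being argued about.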
 

If you check figure 4 of the patent, you can see that it requires determining the frame rate of the content before the frame rate of the display can be updated, and that content frame rate cannot be known for games. The patent talks more about video, which makes sense because the frame rate of video is fixed. Videos suffer from stuttering and tearing too, because their frame rates are usually near 24 fps, which is out of sync with the 30/60 Hz of monitors. So this patent certainly helps video, but either this is not the patent the FreeSync design is based on, or the FreeSync design has to predict the game's fps.

This is not required for GSync to work.
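(A quick back-of-the-envelope illustration of the video case, in Python; the numbers are just the standard 24 fps film on a 60 Hz panel example, not taken from the patent.)

# Why fixed-rate video judders on a fixed 60 Hz panel: 60 / 24 = 2.5, so frames
# must alternate between 2 and 3 refreshes each (the classic 3:2 cadence).
content_fps, panel_hz = 24, 60
print(panel_hz / content_fps)    # 2.5 refreshes per frame on average
# A display that can switch to 24 Hz (or 48/72 Hz) shows every frame for the same
# amount of time -- but only because the content rate is known in advance, which
# is exactly what a game cannot tell you.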
 
If there's lag from V-Sync, I don't want anything to do with it. Some people might, but I'd rather have tearing from no synching than have input lag.

Of course, we still have no idea; what I got most out of AMD's presentation was that they could easily allow AMD GPUs to fully utilize G-Sync-equipped monitors, and that's cool.
 

Sorry, there has been a rash of new posters across a bunch of tech forums that are basically shilling for search results, it seems.

Obviously your post had much more depth and substance to it than theirs, but a new user posting the exact same thing on two different sites seemed a little suspicious. I apologize.

To get back to it: while I don't really feel comfortable detailing what was explained to me (I doubt I could explain it well enough anyway), have you read about eDP and DDM?
I got the feeling it is closer to the latter, but possibly some sort of hybrid.

Edit: There is a big piece of this puzzle, a.k.a. FreeSync, missing; it seems obvious, and I just can't piece it together.
 

Here is the relevant part of that patent:

Briefly, the present invention provides a technique for dynamically adjusting frame rate of a display based at least in part upon the image rate of content to be displayed thereon. ... Alternatively, or in addition to the use of the image frame rate, the updated frame rate can be determined based in part upon a power condition of the device or upon a user input provided to the device.

See, you need to know the frame rate of the content, and you cannot know that for a game.
 

Well I did post the same thing on Overclock.net... I admit I have a Geforce card, but I am more interested in figuring out how NVIDIA can be so stupid as to miss an existing standard. To my best knowledge they didn't. They just chose the more expensive but better way.

Didn't really read eDP or DDM. Can't find a free spec. From Wikipedia it seems eDP can do something called "seamless refresh rate switching", but just from the name of it, it still requires knowing what refresh rate you want to target.

DDM sounds more likely, but I need to find the spec. That will probably be more relevant to mobile, although I don't see why it couldn't be used on desktop.
 
I'm just excited to know that the two rivals are tackling this problem.

It's about time we got rid of V-Sync, tearing, etc.
 
Well I did post the same thing on Overclock.net... I admit I have a Geforce card, but I am more interested in figuring out how NVIDIA can be so stupid as to miss an existing standard. To my best knowledge they didn't. They just chose the more expensive but better way.
CUDA vs OpenCL? Not the first for Nvidia.
 
AMD shows that you can eliminate tearing, like G-Sync does, without an extra $200 silly-cone chip.
I think that's wonderful.
I personally always disable V-Sync, and when I see tearing it's usually while riding an elevator or turning quickly; when running forward or turning slowly, almost never.

Remove the DirectX bottleneck already, AMD: reduced frame rendering times, higher fps, lower CPU overhead. Oh well, 8 days to Kaveri (and hopefully Mantle ;)).

I leave you Green-knights to defend Nvidia's honor.
 
CUDA vs OpenCL? Not the first for Nvidia.

Don't know enough about CUDA to say for sure, but if a VESA standard that NVIDIA could have used to implement G-Sync-equivalent functionality had existed for the past decade, and yet none of NVIDIA, AMD, Intel, or the monitor companies used it to solve a problem John Carmack has been asking about for so long, then I can only conclude that the whole industry just went full retard for that period of time.
 
Of course I haven't seen G-Sync, but isn't the whole idea that syncing the refresh rate to the frame rate every frame makes the motion smoother, so you can't tell you're only at 30 fps?

And the whole reason for that is that even the big-hitting GPUs can't maintain 60 fps in demanding games with the settings turned up.

If that's the case, then couldn't FreeSync lock the frame rate at something conservative (like 30 fps), and it would appear smooth just like G-Sync at 30 (thereby "predicting" the frame rate)?

I think this kind of technology is important because it seems like video cards are getting faster more slowly (transistor shrink troubles, perhaps), and the 3D engines are really pushing the boundaries of what the hardware can handle with all the candy on.

144 fps at a 144 Hz refresh seems less and less realistic, and maybe we should hope that what seems smooth to the eye, even though it's only 30 fps at 30 Hz, is good enough.

This new market push for even higher resolutions is another blow against high refresh rates and frame rates. New cable standards are coming out that just barely meet new display resolutions and refresh rates, leaving no room for improvement, but that's a separate topic.
 

Nvidia loves proprietary tech and AMD tends to take a hardware-agnostic approach, so it's not surprising to me that Nvidia would go the proprietary route while AMD is trying to leverage existing standards. I don't really understand this well enough to be sure, but many of your posts seem to confuse the VESA standard with AMD's adaptation of it.

Someone mentioned G-Sync not having any lag, but my understanding is that it creates a 1-2 ms lag. I have no idea if FreeSync is better or worse in this regard, but the ideal solution would have no lag. It's good that both companies are working on a solution for this old problem, but I don't want to be locked into a hardware ecosystem (GPU/monitor) to take advantage of something like this, so hopefully whatever gets adopted is an open standard. It wouldn't be too big of a deal if I have to buy a new monitor to be able to use something like this, but I would want it to work with any brand of GPU in the future, at least until it's outdated.
 
I don't think this new technique is utilizing a standard that has existed for this very purpose, but rather a mix of features that were never intended for this sort of thing. Nvidia felt this screen-tearing issue was big enough to draw attention to and fix. Nvidia also chose to fix the issue with brute force: money and hardware.

AMD looked at this and probably thought 'there's GOT to be an existing set of features that can fix this a bit more elegantly.' And one of the R&D guys probably had an 'old El Paso' moment and saw an easy fix using a mix of existing features and a bit of software love.

Will we ever see it? Only if Nvidia's G-Sync becomes a thing.
 
It's a good demo because it shows that you really don't need a custom FPGA and a frame buffer to solve this problem. You should be able to amend the existing specs and do this just fine. I've yet to see a compelling technical reason you need a frame buffer on the display; just have the video card resend the frame if the maximum time between frames has passed. If you do that, then all that is needed is a variable refresh mode plus minimum and maximum frame delay numbers added to the config info.

The real question that should be asked of AMD isn't whether they are going to make this a product, but whether they are going to add support for it in their drivers and possibly drive adding a better version of it to the existing standards. It'd be nice to see the display makers do it, but they're not exactly drivers of innovation in this area.

I'd guess a decent number of displays would be just fine with you delaying the frames without changing the refresh rate, as you are basically under-clocking the bus instead of overclocking it.
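(A minimal sketch of the "resend the last frame" idea described above, written as hypothetical Python pseudologic; the callbacks and the min/max numbers are stand-ins for whatever the config info would actually report, not any real driver API.)

import time

MIN_FRAME_MS = 1000 / 144   # fastest refresh the panel reports it can handle
MAX_FRAME_MS = 1000 / 30    # longest the panel reports it can hold one frame

def scanout_loop(frame_ready, send_new_frame, resend_last_frame):
    """Send a new frame as soon as one is ready, but never exceed the max hold."""
    last_scanout = time.monotonic()
    while True:
        elapsed_ms = (time.monotonic() - last_scanout) * 1000
        if frame_ready() and elapsed_ms >= MIN_FRAME_MS:
            send_new_frame()        # a fresh frame: refresh now, no tearing
            last_scanout = time.monotonic()
        elif elapsed_ms >= MAX_FRAME_MS:
            resend_last_frame()     # nothing new in time: repeat the old frame
            last_scanout = time.monotonic()

The point of the sketch is just that the decision can live on the GPU side; no frame buffer on the display is needed if the card is the one that repeats the frame.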
 
... and yet none of NVIDIA, AMD, Intel, or the monitor companies used it to solve a problem John Carmack has been asking about for so long ...

I don't really think it was a priority for any of the three of them. It's easier for them to ignore it and push new cards out the door. To be honest, when did any of them have the customer's best interest in mind?

It's all about money, guys, and Nvidia probably decided that the VESA standard just wasn't good enough. I seriously doubt Nvidia missed a standard... It was probably easier to implement and control hardware-wise rather than through drivers/software. That's my take on it anyways...
 
... many of your posts seem to confuse the VESA standard with AMD's adaptation of it.

Tell me how I am confused.
 
Tell me how I am confused.

Like I said, I don't fully understand FreeSync, but some of your references to variable refresh rates sounded like they were more related to the power-saving advantages that VBLANK has been used for so far. I was curious whether you had more information than the article provided or whether you were extrapolating from information that's available about VBLANK and its current applications (which it sounded like).
 
There are a couple of issues I have with this "FreeSync sets VBI based on a predicted next-frame render time" idea. It certainly sounds plausible, and I would expect the accuracy of the prediction to be much better than you might expect.

However, my issue is that if the frame takes longer to render than predicted, or longer than the VBI was set for, then you get a refresh with no new frame, and the VBI is then set very short, since the frame should be done soon. This may seem a little problematic, but if the second interval is close, it may be alright. I am not sure what the visual implications of this might be.

This could push them toward overestimating the time to render. Any factor of safety or margin they add, say setting the VBI to refresh 2 ms after the predicted frame completes, introduces artificial lag. Any prediction that is longer than the render will introduce lag.

For these reasons I would expect the tech to either use some other method or have input lag. It would conceivably adjust its predictions if it was consistently too ambitious, and introduce a wider safety margin to ensure the rendering was complete.

That being said, I'm sure the prediction is pretty much exact and excellent, so it's probably not that bad.
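(Putting rough numbers on that margin argument, in Python; the figures are made up purely for illustration.)

# If the predictor adds a 2 ms safety margin so the refresh never fires early,
# every frame that finishes on time is shown ~2 ms late: the margin IS added lag.
# If the margin is too small and the frame runs long, the panel refreshes with
# the old frame instead, and the new one waits for the next slot.
predicted_ms = 16.7
margin_ms = 2.0
actual_ms = 16.5
scheduled_refresh_ms = predicted_ms + margin_ms   # refresh scheduled 18.7 ms out
added_lag_ms = scheduled_refresh_ms - actual_ms   # finished frame waits ~2.2 ms
print(added_lag_ms)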
 
My reading of this tech is that it's much more likely to introduce lag, as you can't know the output FPS beforehand. So either AMD is *guessing* at the next predicted render time [where a miss could carry a VERY heavy penalty], or waiting until after the frame is displayed to handle setting up FreeSync, which would introduce a global lag time into the system [possibly trivial, though]. I don't see any other way to pull this off software-wise.
 
G-Sync won't catch on in a way that is meaningful.

The most immediate growth potential for gaming-related hardware is in the living room, and there's no way TVs are going to support that.

Probably a bad business decision to do what they did, depending on how much it cost them, but... no big surprise from current Nvidia.
 
I have a feeling neither of these two will catch on with the public; maybe FreeSync will see some small adoption. I think the panel manufacturers will solve this problem with no help from AMD or Nvidia, which is the right way, IMO.
 
Also keep in mind...

With the experience AMD got developing the new VESA DisplayID v1.3 standard for 4K display panels, I would say they are in a very good position to continue this development with FreeSync.
 
To me FreeSync looks like something that AMD just cobbled together as a "response" to G-Sync with no intention other than to deflate any perceived excitement over the G-Sync launch; i.e., yet another marketing gimmick. It doesn't look like something that they sat down and asked from square one, "How do we solve the problem of vsync, tearing and input lag?" So in that sense I'm not expecting much in the way of a good user experience... if it ever even makes it to market.

That's just my perception of it.
 

If this comes to fruition, AMD will need to do a lot of the legwork on their own. VESA is a standards committee, and if you look at the time frame required to ratify a standard, it always takes a year or longer, so AMD can't sit around and wait in that manner. Now, variable VBLANK is a standard already, but it isn't used on desktop panels, as it is a power-saving feature.

Now, what won't work is what AMD did with HD3D. AMD essentially said, hey guys, here's our API, take it and run with it. "It's a free and open standard! Here's our API!" Only thing is, they didn't support it for shit and didn't update their software. TriDef has the HD3D driver now, and they actually charge money for it.

If AMD takes this approach with FreeSync, I guarantee it won't take off. They have to be proactive, go to panel vendors, and make the push to get this going. So far they haven't done that. Don't get me wrong, a G-Sync alternative would be great, but... I don't know. I don't see AMD being proactive about it - Nvidia didn't announce G-Sync until they had everything in place. As of right now, AMD has nothing in place, but if they can get off their asses and make it happen, they could potentially do so. Nothing will happen, though, if they just say "Hey guys, here's our API! Free and open standards! Come on panel vendors, take a look!" That was AMD's HD3D approach. And it didn't work for shit.

Hopefully, they learned the lesson from that. I would like to see AMD actually put their money where their mouth is for once. If that happens, the competition would be great. Who wouldn't want a free alternative? But simply promising free and open standards won't work. Talking and releasing a driver or API and hoping that panel vendors take notice won't work. What Nvidia did, despite being a hardware solution, is all of the legwork: they invested millions into R&D, developed the logic board, and WENT to panel vendors to get the ball rolling. The net result is that G-Sync is about to hit the ground running. What Nvidia DIDN'T do is say "hey guys! Look at our API! Look at our FPGA schematic! You should use this!" I am really hoping, like I said, that AMD goes the extra mile to get this done, because that is the only way it will get done. They have a lot of legwork to cover right now; let's hope they do it.

This ignores that a software solution like this, to me, doesn't sound like it will eliminate input lag. But I'm willing to wait and see. Anyway, I'm not trying to dog on AMD right now; if it sounds like that, let me know and I'll shut up. But you gotta remember, AMD has a horrible track record on this type of thing. A really goddamn horrible track record in terms of following up on promises and making them actually happen. I am actually crossing my fingers that I'm wrong on this one. I am hopeful that AMD can deliver a free alternative, and they can only do that if they do a ton of legwork and are proactive. So we'll see.
 
Someone mentioned G-Sync not having any lag, but my understanding is that it creates a 1-2 ms lag
NVIDIA claims G-Sync reduces frame rates very slightly, but to my knowledge has said nothing about it introducing latency.
 
CUDA vs OpenCL? Not the first for Nvidia.

LOL. You know that OpenCL is effectively a clone of CUDA, right? OpenCL was made in the image of CUDA, specifically because of CUDA's existence.

No CUDA, no OpenCL. They had to invent it for it to become a thing.
 

And let us also remember that without BrookGPU (2002) -> CTM (2005)/Brook+ or Lib Sh (2003), there would very doubtfully be an Nvidia Cg (2003) or CUDA (2007) to begin with either.

IIRC, while OpenCL (developed in 2007 by Apple) is similar to CUDA, it is a superset of C99. I know some like to think nV invented the Sun (and Apple the Moon)... alas, I hate to break it to you: they are/were hardly the first; they (both) just have superior marketing.
 

Ah yes, Brook...! Great, extremely efficient, and mind-meltingly complex to program... first steps...
 

Yeah, Brook was pretty much the granddaddy of them all. Not the first, but certainly among the most complete for its time (DirectX 9/OpenGL hardware, i.e., the 9700/5800 era). Brook "morphed" into Brook+/CTM and then Stream on the AMD side. On the nV side it heavily influenced Cg (a DirectX/OpenGL implementation with MS), which brought about CUDA. If OpenCL has any "parental" tree from which it sprang, the closest would be Cg.
 
Ouch. If some monitors support it, or vendors start to support it, it could undersell G-Sync and even work with older AMD hardware :|

I wonder if NV hardware also already works with this? That would be the best scenario for everyone (consumers).

It doesn't seem like NV's hardware works with this. According to TPU, Nvidia GPUs lack support for the dynamic refresh rate standard (something AMD has had since the 5000 series) and therefore need to add a module to the screen:

According to AMD's Raja Koduri, the display controllers inside NVIDIA GPUs don't support dynamic refresh rates the way AMD's do, and hence NVIDIA had to deploy external hardware.
http://www.techpowerup.com/196557/amd-responds-to-nvidia-g-sync-with-freesync.html

What was also interesting from the article above is that most LCDs have this implemented:

Dynamic refresh is reportedly also a proposed addition to VESA specifications, and some (if not most) display makers have implemented it
http://www.techpowerup.com/196557/amd-responds-to-nvidia-g-sync-with-freesync.html

Here's how Nvidia does it on the screen instead:

With a G-Sync enabled display, when the monitor is done drawing the current frame it waits until the GPU has another one ready for display before starting the next draw process. The delay is controlled purely by playing with the VBLANK interval.

You can only do so much with VBLANK manipulation though. In present implementations the longest NVIDIA can hold a single frame is 33.3ms (30Hz). If the next frame isn’t ready by then, the G-Sync module will tell the display to redraw the last frame.
http://www.anandtech.com/show/7582/nvidia-gsync-review

Putting it on the screen, as in Nvidia's G-Sync, seems like the superior option, but getting it for free on existing hardware seems like a sweet deal if you don't want to pay the premium, or if you want a larger selection of screens and don't want to be locked to Nvidia.

I hope this will evolve into a hybrid standard, though, one that works on both Nvidia and AMD hardware. :)

It would be cool to get "FreeSync" for TVs, since it would be great for HTPCs, and the new consoles have hardware support for it as well. Sony and MS should push to make this happen.
 
It doesn't seem like NV's hardware works with this. According to TPU, Nvidia GPUs lack support for the dynamic refresh rate standard (something AMD has had since the 5000 series) and therefore need to add a module to the screen:
This is a little inaccurate. They are adding the module because it is doing some other things, hence the big RAM chips on it. They wouldn't need to add hardware to displays to do something AMD can do on a GPU. The technologies are different.

What was also interesting from the article above, is that most LCD's have this implemented:
The article is suggesting that many display companies have implemented it. That means on the various mobile devices they have supplied for. No display company has implemented this on desktop monitors.
 
This is a little inaccurate. They are adding the module because it is doing some other things, hence the big RAM chips on it. They wouldn't need to add hardware to displays to do something AMD can do on a GPU. The technologies are different.

Inaccurate how? Are you suggesting that the display controllers in Nvidia GPUs do support the dynamic refresh rate standard, and that TPU's article/AMD's Raja Koduri was therefore wrong?
 
Now, what won't work is what AMD did with HD3D. AMD essentially said, hey guys, here's our API, take it and run with it. "It's a free and open standard! Here's our API!" Only thing is, they didn't support it for shit and didn't update their software.

3D has never worked commercially. Hollywood has been trying to push it onto the masses for decades. Where is it now? Maybe, for once, AMD knew not to dump money into a dead end technology and decided to let others spend the resources. If it caught on, they would worry about it then.
 
Inaccurate how? Are you suggesting that the display controllers in Nvidia GPUs do support the dynamic refresh rate standard, and that TPU's article/AMD's Raja Koduri was therefore wrong?

I think Koduri might have been misquoted. I read that he was asked why Nvidia needed extra hardware and he said that he didn't know why, but that it's possible their GPUs don't support VBLANK.
 