Getting best G-Sync performance in MAME & RetroArch

Yeah, it was worth the time spent since I now know what the pull mode option actually does. ;)

ASIO4ALL is a tricky beast. The only cards I've gotten it to work with consistently are integrated ones. And with those, the general audio/driver stability is better than with any native ASIO driver I have tested.

I spoke with MAME devs and the MAMEUIFX guy, and basically nobody wants to add ASIO because of the closed source DLL dependency. It's really a shame, because this is objectively the best MAME's ever been.

Can you share your thoughts about WASAPI with us? It seems like that's the only shot we've got at lower latency audio in an officially supported manner.
 
I spoke with MAME devs and the MAMEUIFX guy, and basically nobody wants to add ASIO because of the closed source DLL dependency.

Yeah, this is how I would reason if I were them as well. I never intended for this to be merged into mainline, but seeing the interest perhaps it's time to try and do something that could be merged (or maybe somebody beats me to it).

It's really a shame, because this is objectively the best MAME's ever been.

Thanks :) it really is a noticeable difference.

Can you share your thoughts about WASAPI with us? It seems like that's the only shot we've got at lower latency audio in an officially supported manner.

There are basically two parts needed for low-latency audio in MAME: an ASIO/WASAPI output path and a resampler. MAME already has a built-in resampler, but the ASIO code doesn't use it, since I suspect the one in BASSASIO is of higher quality, and it is easier to use. The resampler needs to take three things into account: the sample rate, the game speed, and the rate the sound card is actually running at (which is not what one would expect, at least not with ASIO).
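To make that concrete, here's a rough sketch of my own (not the actual patch) of how those three factors could be folded into a single resampling ratio:

// Illustration only: the ratio handed to the resampler has to combine the
// nominal stream rate, MAME's current speed factor and the rate the card is
// really consuming samples at (measured, not assumed).
double resample_ratio(double stream_rate,  // e.g. 48000.0, what the emulation generates
                      double game_speed,   // 1.0 = full speed; follows -speed / refresh pacing
                      double device_rate)  // what the sound card actually consumes per second
{
    // samples produced per second / samples consumed per second
    // (depending on the resampler's convention you may want the inverse)
    return (stream_rate * game_speed) / device_rate;
}

// e.g. resample_ratio(48000.0, 1.001, 47998.7) gives the factor needed to
// keep the buffer from slowly drifting towards an underrun or overrun.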

There's another tweak needed to keep the latency down as well: standard MAME uses a periodic timer (50 Hz) for audio generation, and this has been modified to happen at the end of each emulated video frame instead. The modification isn't strictly needed, but it lowers the latency a bit and also makes it easier to keep the latency as low as possible.
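As a toy model of that change (my own illustration, not the actual diff): the audio update itself stays the same, generating everything owed up to the current emulated time and pushing it to the backend; only the place it is called from moves from a ~50 Hz timer to the end-of-frame handling.

#include <cstdint>

struct audio_state {
    double sample_rate  = 48000.0;  // emulated stream rate
    double generated_to = 0.0;      // emulated time we've already produced samples for
};

// Called at the end of each emulated frame instead of from a periodic 50 Hz
// timer, so a frame's worth of samples reaches ASIO/WASAPI as soon as the
// frame is finished rather than up to 20 ms later.
int64_t update_audio(audio_state &st, double now /* current emulated time, seconds */)
{
    const int64_t pending = static_cast<int64_t>((now - st.generated_to) * st.sample_rate);
    st.generated_to = now;
    // ...mix 'pending' samples here and hand them to the audio backend...
    return pending;
}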

But if these conditions are met, WASAPI will probably work as well as ASIO.
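For what it's worth, here's a rough sketch of my own (not from any patch) of the WASAPI configuration that gets its latency into ASIO territory: an exclusive-mode, event-driven render client using the device's minimum period. The format, buffer sizing and missing error handling are all assumptions to keep the sketch short.

#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

// Sketch only: format negotiation and the buffer-alignment retry that
// exclusive mode sometimes requires are omitted.
bool init_wasapi_exclusive(IAudioClient **out_client,
                           IAudioRenderClient **out_render,
                           HANDLE *out_event)
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator *enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&enumerator);

    IMMDevice *device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient *client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void **)&client);

    // Assumed format: 48 kHz / 16-bit / stereo. Exclusive mode only accepts
    // formats the device supports, so a real implementation should check with
    // IAudioClient::IsFormatSupported first.
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 48000;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    // Use the device's minimum period so the buffer (and thus the latency)
    // is as small as the hardware allows.
    REFERENCE_TIME def_period = 0, min_period = 0;
    client->GetDevicePeriod(&def_period, &min_period);

    HRESULT hr = client->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE,
                                    AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                    min_period, min_period, &fmt, nullptr);
    if (FAILED(hr))
        return false;

    // The event fires once per period; the mixing thread waits on it and
    // refills the buffer via IAudioRenderClient::GetBuffer/ReleaseBuffer.
    HANDLE event = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    client->SetEventHandle(event);

    IAudioRenderClient *render = nullptr;
    client->GetService(__uuidof(IAudioRenderClient), (void **)&render);

    client->Start();

    *out_client = client;
    *out_render = render;
    *out_event  = event;
    return true;
}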
 
I'm messing around with the latest nightly and have copied these settings (including bigbluefe's recommendation to disable "Threaded Optimization" and force G-Sync for the RetroArch exe). There's still some very minor hitching, but compared to before, it's actually tolerable, though not as smooth as MAME. I'm running it at 120 Hz.

Did you check the retroarch.cfg for this line?
video_refresh_rate = "59.950001"

This line should be set to your G-Sync refresh rate (audio sync, vsync and GPU sync off, frame throttle 1.0).
(If this does not help, then the problem seems to be G-Sync with RetroArch...)
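For reference, the matching retroarch.cfg entries for a 120 Hz G-Sync panel would look roughly like this; the key names for "audio sync", "GPU sync" and "frame throttle" are my best guess at the menu-to-config mapping, so double-check them against your own file:

video_refresh_rate = "120.000000"
audio_sync = "false"        # audio sync off
video_vsync = "false"       # vsync off
video_hard_sync = "false"   # assuming "GPU sync" means the Hard GPU Sync option
fastforward_ratio = "1.0"   # assuming "frame throttle 1.0" maps to this key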
 
There's another tweak needed to keep the latency down as well, standard MAME uses a periodic timer for audio generation (50 Hz), this has been modified to happen at the end of each emulated video frame. The modification isn't strictly needed, but it lowers the latency a bit and also makes it easier to keep the latency as low as possible.
If we're seeing 80 ms+ latency even with ASIO, shouldn't audio be emulated as soon as possible instead of at the end of the frame?
It makes sense to do things like sample inputs as late as possible, but shouldn't audio be as early as possible?

I'm also curious to know when real hardware plays audio. With things like lip-sync correction for video, it's generally recommended to get audio to play in the middle of the frame.
 
If we're seeing 80 ms+ latency even with ASIO, shouldn't audio be emulated as soon as possible instead of at the end of the frame?

This is what happens, even though it feels a bit backward. :)

When the audio update routine is called, sample generation up to the current emulated time is performed and passed to the audio backend. Perhaps more can be done to further reduce latency in the pipe, but I don't know the details of the implementation.

It makes sense to do things like sample inputs as late as possible, but shouldn't audio be as early as possible?

Absolutely.

I'm also curious to know when real hardware plays audio. With things like lip-sync correction for video, it's generally recommended to get audio to play in the middle of the frame.

I absolutely agree. I did some tests yesterday with the NES driver, in the same manner as the previous tests, with results ranging from about 45 to 70 ms end to end. A real-hardware comparison would be nice. I also did some tests with RA + MAME core, and the results were about the same as with MAME + DirectSound, even though I set the audio buffer to 16 ms.

Also, a note regarding the total latency, see this post: http://forum.arcadecontrols.com/index.php/topic,141869.msg1469234.html#msg1469234 . That was simply a test of how quickly my system is able to react via ASIO when using the same input device as the one I use in MAME: it just polls a getch or something and outputs a sine wave when triggered. So for input we're looking at somewhere between 1 and 10 ms of latency, which makes sense since the device is polled at 100 Hz. There's also a frame granularity of about 16 ms (at 60 Hz), so all in all this accounts pretty well for the variation in the measurements (max-min ends up at about 24-25 ms). Also, I used frame_delay for the measurements.
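Spelling out the arithmetic (my own back-of-the-envelope, not from the linked post): ~9 ms of polling jitter (1-10 ms at 100 Hz) plus ~16.7 ms of frame granularity (at 60 Hz) gives roughly 25-26 ms of possible spread, which lines up with the observed 24-25 ms max-min.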
 
Bumping. I'm pretty desperate here. Gonna download the latest RA stable and try again. Any new developments regarding G-Sync and emulation?
 
I bought a G-Sync monitor, but it doesn't work with MAME! Why? Maybe because I use ddraw?
 