Getting the best G-Sync performance in MAME & RetroArch

BlackGuyRX

After reading bigbluefe's latest thread regarding his YT videos, and from my own personal experience, it seems like RetroArch isn't performing too well with G-Sync enabled. While the lack of tearing and latency is really nice, I can't help but notice some slight stuttering during gameplay, and I wasn't sure if that was an inherent side effect of G-Sync or if I'm doing something wrong. It doesn't help that info regarding G-Sync and emulation is rather scant.

What are the optimal settings to make RetroArch perform flawlessly? My monitor is an AOC G2460PF. Maybe I'm going crazy, but I think I'm seeing the same issues in vanilla UI MAME as well.
 
I haven't actually played MAME since I got my XBH270U, and I've never used RetroArch. But if you think that G-SYNC is causing stuttering, maybe try using the NVIDIA Control Panel to set MAME.EXE or RA.EXE (or whatever the RA executable is) to use ULMB instead of G-SYNC. I can't imagine that your frame rates are so low that you would benefit from G-SYNC anyway. At higher frame rates, ULMB is better.

This thread has some good info on G-SYNC vs ULMB: http://hardforum.com/showthread.php?t=1841352
 
It seems MAME is still difficult to deal with, even with G-Sync. I went back to mameuifx64 0.169 only to deal with stuttering audio, and it's driving me crazy.
 
Have you tried Final Burn by chance? Not trying to trash talk MAME (knowing some people), but I find Final Burn generally has better compatibility with things.
 
@blackguyrx!
FreeSync with RetroArch 1.3.0 now works perfectly (so G-Sync should also work).
I also had some slight stuttering; this was because RetroArch set the monitor frequency to 59.xxx Hz. Try editing retroarch.cfg and setting the monitor frequency to your G-Sync frequency (mine is 144 Hz). Disable vsync, GPU sync, and audio sync, set frame throttle to 1.0, and now all games run perfectly without tearing...
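For reference, here is a sketch of what those entries look like in retroarch.cfg. Key names are from memory; in particular I'm assuming "frame throttle" maps to fastforward_ratio, so double-check against your build:

Code:
# retroarch.cfg - sketch of the settings above (key names may differ by version)
video_refresh_rate = "144.000000"   # match your G-Sync/FreeSync refresh instead of 59.xxx
video_vsync = "false"               # disable vsync
video_hard_sync = "false"           # disable hard GPU sync
audio_sync = "false"                # disable audio sync
fastforward_ratio = "1.0"           # "frame throttle" at 1.0 (assumed key name)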
 
Have you tried Final Burn by chance? Not trying to trash talk MAME (knowing some people), but I find Final Burn generally has better compatibility with things.

Huh? MAME's basically the only program I've seen that's actually rock solid when it comes to running games at the proper refresh rate with a variable refresh monitor.
 
@blackguyrx!
FreeSync with RetroArch 1.3.0 now works perfectly (so G-Sync should also work).
I also had some slight stuttering; this was because RetroArch set the monitor frequency to 59.xxx Hz. Try editing retroarch.cfg and setting the monitor frequency to your G-Sync frequency (mine is 144 Hz). Disable vsync, GPU sync, and audio sync, set frame throttle to 1.0, and now all games run perfectly without tearing...

I'm messing around with the latest nightly and have copied these settings (including bigbluefe's recommendation to disable "Threaded Optimization" and force G-SYNC for the RetroArch exe). There's still some very minor hitching, but compared to before, it's actually tolerable, though not as smooth as MAME. I'm running it at 120Hz.
 
I'm messing around with the latest nightly and have copied these settings (including bigbluefe's recommendation to disable "Threaded Optimization" and force G-SYNC for the RetroArch exe). There's still some very minor hitching, but compared to before, it's actually tolerable, though not as smooth as MAME. I'm running it at 120Hz.

Are you running the ASIO variant of MAME/GroovyMAME? If not, definitely look into that. It's a game changer.

MAME has basically ZERO audio latency with it. It's absolutely incredible. Once you experience it, there's no going back. Retroarch isn't good enough. Baseline MAME isn't good enough anymore. If you're not running MAME with ASIO, you're not doing emulation as well as it can be done.

https://www.youtube.com/watch?v=_7tZeKxi2wE
 
Are you running the ASIO variant of MAME/GroovyMAME? If not, definitely look into that. It's a game changer.

MAME has basically ZERO audio latency with it. It's absolutely incredible. Once you experience it, there's no going back. Retroarch isn't good enough. Baseline MAME isn't good enough anymore. If you're not running MAME with ASIO, you're not doing emulation as well as it can be done.

https://www.youtube.com/watch?v=_7tZeKxi2wE

Yeah, I saw that video. I don't think my setup supports ASIO.
 
Are you building a custom MAME box, or playing Emulated games on your daily PC?

I feel dirty even suggesting this, but for oldschool games, an RGB CRT with non-discrete refresh is going to give you the most authentic experience.

It's absolutely rubbish for daily use, though, and finding one is difficult enough.
 
Are you building a custom MAME box, or playing Emulated games on your daily PC?

I feel dirty even suggesting this, but for oldschool games, an RGB CRT with non-discrete refresh is going to give you the most authentic experience.

It's absolutely rubbish for daily use, though, and finding one is difficult enough.

I really don't think it is. There are too many new HD games that you'd want to play now on an arcade cabinet. CRT setups are only worth it if you have tons of disposable space in your house.
 
I really don't think it is. There are too many new HD games that you'd want to play now on an arcade cabinet. CRT setups are only worth it if you have tons of disposable space in your house.

Ah, makes sense. I was thinking Pre-2000 games.
 
Are you running the ASIO variant of MAME/GroovyMAME? If not, definitely look into that. It's a game changer.

MAME has basically ZERO audio latency with it. It's absolutely incredible. Once you experience it, there's no going back. Retroarch isn't good enough. Baseline MAME isn't good enough anymore. If you're not running MAME with ASIO, you're not doing emulation as well as it can be done.

https://www.youtube.com/watch?v=_7tZeKxi2wE
It should be noted that ASIO4ALL is an ASIO wrapper for Kernel Streaming drivers, which is not the same thing as native ASIO support.
What your video shows is that there's nearly zero latency between MAME and ASIO4ALL (looks like it's just over 2ms if you're using a 96 sample buffer at 48kHz) but the real audio latency will be between ASIO4ALL and your Realtek device's kernel streaming driver - which is not something that is going to show up in that recording.
The end result may still have lower latency overall, since MAME doesn't have support for kernel streaming devices, but the latency from ASIO4ALL to the KS driver is going to be non-zero.

With a sound device that has native ASIO support, it should be possible to achieve a true 1ms latency between the program's output and the speaker - though buffer sizes that low can be prone to buffer underruns if the CPU is being taxed.
I don't know what sort of buffer size ASIO4ALL will be using with the KS driver, but in other software I've generally found latency to be higher using KS instead of ASIO with my DAC.
With native ASIO I can reduce the buffer size to 64 samples for 1.33ms latency at 48kHz. (or 128 for 96kHz, 256 for 192kHz)
With my Creative sound cards, 2ms is achievable via ASIO, 1ms seems to have buffer issues. (crackling audio)
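Those figures are just buffer size divided by sample rate, e.g.:

Code:
>> 1000 * 64 / 48000     % 64-sample buffer at 48kHz -> latency in ms
ans = 1.3333
>> 1000 * 128 / 96000    % 128-sample buffer at 96kHz -> latency in ms
ans = 1.3333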

As long as the total latency is less than half a frame, I would be surprised if you have perceptible latency though, and that's certainly possible even if you're going through an ASIO wrapper and a Kernel Streaming driver.

That certainly beats RetroArch though, whose xaudio buffer can only be set as low as 16ms in the latest builds.
You can edit it via the config files, but I either get RA crashing on launch, or have crackling audio when I do that.
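For anyone who wants to try it anyway, the entry I'm editing should be audio_latency (value in milliseconds):

Code:
# retroarch.cfg - forcing the audio buffer below the menu's 16ms floor
audio_latency = "8"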
 
It should be noted that ASIO4ALL is an ASIO wrapper for Kernel Streaming drivers, which is not the same thing as native ASIO support.
What your video shows is that there's nearly zero latency between MAME and ASIO4ALL (looks like it's just over 2ms if you're using a 96 sample buffer at 48kHz) but the real audio latency will be between ASIO4ALL and your Realtek device's kernel streaming driver - which is not something that is going to show up in that recording.
The end result may still have lower latency overall, since MAME doesn't have support for kernel streaming devices, but the latency from ASIO4ALL to the KS driver is going to be non-zero.

With a sound device that has native ASIO support, it should be possible to achieve a true 1ms latency between the program's output and the speaker - though buffer sizes that low can be prone to buffer underruns if the CPU is being taxed.
I don't know what sort of buffer size ASIO4ALL will be using with the KS driver, but in other software I've generally found latency to be higher using KS instead of ASIO with my DAC.
With native ASIO I can reduce the buffer size to 64 samples for 1.33ms latency at 48kHz. (or 128 for 96kHz, 256 for 192kHz)
With my Creative sound cards, 2ms is achievable via ASIO, 1ms seems to have buffer issues. (crackling audio)

As long as the total latency is less than half a frame, I would be surprised if you have perceptible latency though, and that's certainly possible even if you're going through an ASIO wrapper and a Kernel Streaming driver.

That certainly beats RetroArch though, whose xaudio buffer can only be set as low as 16ms in the latest builds.
You can edit it via the config files, but I either get RA crashing on launch, or have crackling audio when I do that.

What's interesting is that, at least with MAME, ASIO4ALL seems to actually work better than Creative cards in terms of being able to use lower latency settings and still get no crackling.
 
Are you building a custom MAME box, or playing Emulated games on your daily PC?

I feel dirty even suggesting this, but for oldschool games, an RGB CRT with non-discrete refresh is going to give you the most authentic experience.

It's absolutely rubbish for daily use, though, and finding one is difficult enough.

It's a standard desktop I use for everything, including gaming. I'd like to build an HTPC with GroovyMAME & Lakka for a comfy CRT setup, though.

And believe me, you have no idea how badly I want a CRT right now, especially the large professional broadcast displays. I'm lucky to even find a standard VGA display at this point.
 
What's interesting is that, at least with MAME, ASIO4ALL seems to actually work better than Creative cards in terms of being able to use lower latency settings and still get no crackling.
Lower total latency?
As I said above, you'll get "2ms" latency between MAME and the ASIO4ALL driver, but an unknown amount of latency between ASIO4ALL and your device's Kernel Streaming driver.

In my experience, there is generally not much difference between native ASIO, WASAPI Exclusive, and Kernel Streaming latency - and my understanding was that KS does not actually bypass the Windows audio stack in newer versions of Windows, while native ASIO does.
So even if you're getting "2ms" between MAME and ASIO4ALL, it's unlikely that the buffer being used by ASIO4ALL for the Kernel Streaming output is any smaller than you would be able to achieve via a direct native ASIO output.

So if you had to increase the buffer size to 4ms for native ASIO, it's likely that the minimum latency you have via ASIO4ALL is 2ms ASIO+4ms KS, or 6ms total.
I can't imagine a scenario where you would be able to achieve a lower latency going through ASIO4ALL than native ASIO, considering that it's going through two audio buffers instead of one.

It's possible that you have a device with native ASIO support which is not well optimized for low latency access, and a different audio device might be able to achieve better latency than it even with the combined ASIO4ALL+KS latency though.
 
Also to add, I'm still getting crackling audio in RA, though it seems to happen every minute or so. It was noticeable when I played StarFox 2 via Higan.
 
Also to add, I'm still getting crackling audio in RA, though it seems to happen every minute or so. It was noticeable when I played StarFox 2 via Higan.

Really, I'm done with Retroarch. At this point, I'm just waiting until MAME emulates everything well enough. After you've experienced low audio latency, every other emulator feels cheap and pedestrian.
 
RA still has custom integer scaling and the best shaders, so in a way it is the preferred one for playing on flat panels, but GM is slowly catching up in terms of usability and even besting RA's performance under certain conditions (ASIO, frame_delay).
It'll take a bit of time but, for arcade emulation at least, GM will end up winning in popularity as well. Right now it's still a bit too raw though; I hope the inclusion of MEWUI will make things more user-friendly (not that RA is any easier, but GM doesn't even have a fully compatible UI, a real pain).
 
How user friendly is GM? I want to mess around with the MESS portion of MAME again, mostly for MSX. I'm used to the GUI versus command line.

Well, it's not the most user-friendly; it's mostly ini file editing and the command line for some tasks (though relatively few: you can get away with just creating the ini at the beginning).
Though since official MAME will soon include a much nicer default GUI, GroovyMAME will naturally get it too, and at least browsing games and tweaking a few options will become friendlier. Most people might want to wait for 0.171 to try GM for the first time.
Still, I guess the main features that make GM attractive (SwitchRes, frame_delay, etc.) will remain ini file stuff for a while. Basic use of those isn't complicated though.

So, GM's worth as a main build: we know it's aimed first at '15kHz' setups with real low-res CRTs, and it's bare because stuff like a dedicated Windows GUI is superfluous for what people do with their MAME cabs...

BUT its specific features actually allow it to be 1 frame faster than all other MAME builds, even on LCDs, simply by telling it to use d3d9ex; works from Vista to W10 AFAIK.
(RA of course is the other 'faster MAME', but for other reasons, since it isn't a MAME build.)
GM by default will automatically switch between vsyncing to the monitor's refresh and triple buffering, based on a Hz tolerance limit you can define manually. For instance, if you have a 60Hz monitor, games close to an actual 60Hz get vsynced to it (smooth scrolling), and those too far off 60Hz for your taste, say by 2.5Hz, get triple-buffered to maintain the original speed.
-> Again, in both cases GM will still be 1 frame faster than other builds, even on modern Windows.
Note 1: like RA, it also includes a display analyzer and modeline generator you can let work automatically; GM will mind your monitor's actual refresh (not all monitors and TVs are exactly 60Hz).
Note 2: it's also one of the few to retain a limited but at least existing integer scaling function (cleanstretch) for those who care about it. This will apparently be improved and expanded in the future.
Then, depending on your CPU/GPU strength and monitor capabilities, you can reduce lag even further by tweaking frame_delay and sync_offset per game, as well as other things, but that's the advanced side of GM and I'm not sure how it'll do with G-Sync/FreeSync.

Probably those advanced features aren't useful with G-Sync/FreeSync anymore, but if d3d9ex works as well as 'normal' d3d with G-Sync, then you're probably still at an advantage using GM.
Add ASIO support to that and you've got two things performing better than the current RA.
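To give a rough idea, the relevant mame.ini bits look something like this; the option names are from memory, so treat them as approximate and check the GM docs:

Code:
#
# mame.ini - GroovyMAME sketch (option names approximate)
#
# GM's D3D9ex renderer, ~1 frame faster even on LCDs:
video                     d3d9ex
# vsync to the monitor's real refresh when the game is close enough:
syncrefresh               1
# fall back to triple buffering when the game is further off than this (Hz):
sync_refresh_tolerance    2.0
triplebuffer              1
# raise per game for extra lag reduction, CPU/GPU permitting:
frame_delay               0
# integer scaling:
cleanstretch              1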

[/rant]
Now to answer your question :D I haven't personally tried the MESS features, so no experience with MSX, sorry.
 
You should always use ULMB. G-Sync and all its relatives are gimmicks. The lack of tearing is nice, but I would rather be able to see what I am doing when scrolling / turning / aiming / panning the camera. As long as you keep the framerate high, the tears are very small and barely noticeable. Just keep it over 85 and you are good.
 
Well, it depends... how well does that work to compensate for hardware like Seibu or Irem boards, for instance, which sync at around 54~55Hz?
For someone who absolutely loathes the slightest tearing and lag with those, G-Sync is golden.
Then it's a matter of choice with regard to panel response; those old arcade games don't scroll insanely fast like PC FPS/RTS games (you realize what kind of games MAME is about, right?)

EDIT: Or, say, in the end we could just sort out which games benefit more from G-Sync and which benefit more from ULMB. Do we need a single solution to everything at once anyway?
I have a counter-example to those really off-60Hz games that IMHO benefit more from G-Sync: top-to-bottom scrolling shooting games in landscape (as opposed to the usual portrait-oriented shooter in fullscreen). Those definitely benefit from strobing, and even massively if you use some scanline overlay on top (e.g. GigaWing).
 
G-Sync is a clear win over ULMB on a 144Hz G-Sync monitor.

Monitor blur is heavily overrated. G-Sync for any game that doesn't run at 60Hz, 120Hz, etc. natively.

Every emulator is user friendly if you use a good frontend. I made a frontend that makes it trivial to play games using GroovyMAME with an arcade cabinet, but it's not really meant for desktop use.

https://sites.google.com/site/bigbluefrontend/
 
G-Sync is a clear win over ULMB on a 144Hz G-Sync monitor.
Monitor blur is heavily overrated. G-Sync for any game that doesn't run at 60Hz, 120Hz, etc. natively.
Maybe it's not an issue if you only play fighting games, but for anything where the screen scrolls, motion blur is a serious problem.
 
Maybe it's not an issue if you only play fighting games, but for anything where the screen scrolls, motion blur is a serious problem.

I don't completely agree with that, because to begin with, motion isn't perfect on low-res 15kHz CRTs (the natural displays for those games): there is some loss of definition, in particular during vertical scrolling (in landscape, so for games like GigaWing, or vertical/portrait shooters that have screen wobble) and diagonal scrolling.
Stable horizontal scrolling, though, appears free of it because pixels aren't jumping over the mask gaps between the scanlines.
People who think CRTs at those frequencies have perfect motion simply don't know, or haven't seen the real deal for ages (or never, if they're too young).
I own RGB Trinitrons and other shadow mask displays, and have been playing on those as well as in arcades for decades, and it's kind of funny to read what fantasies people have regarding those displays.

A 60Hz flat panel is fine for those pre-HD games as long as its response is instant or extremely close, and without artifacts, which so far only plasma and OLED have almost achieved (plus a select few of the fastest TNs, I have to admit).
I've seen what strobing can do for those games, and seriously, in many cases motion was superior to the real thing, with more detail retained during those <-> and diagonal movements.
Whatever we do to make our old arcade/console games look less dramatically awful on LCDs should be mostly focused on completely eliminating pixel smearing/ghosting, not necessarily the normal motion artifacts, because again, those are there on CRTs too.
Of course it's even nicer if they're eliminated too, but that means stepping into enhancement rather than reproduction.
 
I don't completely agree with that, because to begin with, motion isn't perfect on low-res 15kHz CRTs (the natural displays for those games): there is some loss of definition, in particular during vertical scrolling (in landscape, so for games like GigaWing, or vertical/portrait shooters that have screen wobble) and diagonal scrolling.
Stable horizontal scrolling, though, appears free of it because pixels aren't jumping over the mask gaps between the scanlines.
People who think CRTs at those frequencies have perfect motion simply don't know, or haven't seen the real deal for ages (or never, if they're too young).
I own RGB Trinitrons and other shadow mask displays, and have been playing on those as well as in arcades for decades, and it's kind of funny to read what fantasies people have regarding those displays.
I don't think anyone would claim that motion on CRTs is perfect.
However I don't see how anyone can say that an LCD/OLED display with 16.67ms persistence (60Hz) is not significantly worse than a display with <1ms persistence - and then you have the LCD's response time on top of that.
I have a CRT that I keep around specifically because I can't stand to play 2D games on a flat panel display, due to the severe motion blur which is present on all of them.

If we are talking about emulation specifically, there is an argument to be made in favor of G-Sync for latency, but that is the only positive thing it does for emulation compared to a multi-sync CRT.

A 60Hz flat panel is fine for those pre-HD games as long as its response is instant or extremely close, and without artifacts, which so far only plasma and OLED have almost achieved (plus a select few of the fastest TNs, I have to admit).
I've seen what strobing can do for those games, and seriously, in many cases motion was superior to the real thing, with more detail retained during those <-> and diagonal movements.
Whatever we do to make our old arcade/console games look less dramatically awful on LCDs should be mostly focused on completely eliminating pixel smearing/ghosting, not necessarily the normal motion artifacts, because again, those are there on CRTs too.
Of course it's even nicer if they're eliminated too, but that means stepping into enhancement rather than reproduction.
I really have to wonder what kind of CRTs you're comparing things to, if you find LCDs with strobing options to be better for motion handling. That doesn't seem right at all.
But the discussion here was about G-Sync, which requires that the display is flicker-free.
So we're not talking about LCDs with strobe options, we're talking about full-persistence displays.
 
I don't think you're quite seeing what I'm talking about. I'm talking about true low-res CRTs: the same kind seen in arcade cabinets, pro broadcast CRTs, or European consumer CRT tellies.
If you have no experience with those, you can't see what motion artifacts I'm specifically talking about, nor why I'm comparing them to 60Hz motion blur. They're not the same thing, but (depending of course on the LCD monitor's performance) those two kinds of minor motion flaws look very similar in practice, especially when emulating old 2D arcade and console games, and even more so when simulating scanlines or a CRT mask.
And I was also saying those particular artifacts can disappear when using strobing on the emulation side, no matter if that technology has its own flaws. I was not getting into the challenges of the current new-technology gimmicks here, nor was I 'ranking' stuff.
 
It should be noted that ASIO4ALL is an ASIO wrapper for Kernel Streaming drivers, which is not the same thing as native ASIO support.
What your video shows is that there's nearly zero latency between MAME and ASIO4ALL (looks like it's just over 2ms if you're using a 96 sample buffer at 48kHz) but the real audio latency will be between ASIO4ALL and your Realtek device's kernel streaming driver - which is not something that is going to show up in that recording.
The end result may still have lower latency overall, since MAME doesn't have support for kernel streaming devices, but the latency from ASIO4ALL to the KS driver is going to be non-zero.

Hi, I'm the developer of the GM ASIO patch, and I did a couple of comparisons between ASIO4ALL+Realtek, Xonar DGX ASIO driver and an SB Live! with the kX driver.

The measurements were done in the same manner as these ones here: http://forum.arcadecontrols.com/index.php/topic,141869.msg1469093.html#msg1469093

Code:
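>> % Measured end-to-end audio latencies in milliseconds, 10 runs per device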
>> % A4A+RT
>> a4a = [ 81 ; 78 ; 70 ; 86 ; 83 ; 89 ; 77 ; 80 ; 88 ; 95 ];
>> % Xonar DGX
>> dgx = [ 70 ; 75 ; 82 ; 88 ; 94 ; 81 ; 88 ; 74 ; 86 ; 85 ];
>> % kX Live!
>> live = [ 73 ; 85 ; 87 ; 67 ; 80 ; 72 ; 87 ; 76 ; 75 ; 93 ];
>> meas = a4a; max(meas), min(meas), mean(meas), std(meas)
ans =  95
ans =  70
ans =  82.700
ans =  7.1188
>> meas = dgx; max(meas), min(meas), mean(meas), std(meas)
ans =  94
ans =  70
ans =  82.300
ans =  7.4394
>> meas = live; max(meas), min(meas), mean(meas), std(meas)
ans =  93
ans =  67
ans =  79.500
ans =  8.2496
>>

While the audio that goes through A4A+RT probably goes through another buffer, the practical effect is absolutely negligible (at least on my setup). The reason for the Live! getting slightly better results is attributed to the sampling points (as can be noted, standard deviation is higher).
 
Hi, I'm the developer of the GM ASIO patch, and I did a couple of comparisons between ASIO4ALL+Realtek, Xonar DGX ASIO driver and an SB Live! with the kX driver.
The measurements were done in the same manner as these ones here: http://forum.arcadecontrols.com/index.php/topic,141869.msg1469093.html#msg1469093
While the audio that goes through A4A+RT probably goes through another buffer, the practical effect is absolutely negligible (at least on my setup). The reason for the Live! getting slightly better results is attributed to the sampling points (as can be noted, standard deviation is higher).
Thanks for the reply.
If I'm reading this correctly, all three options are producing about 80ms latency? That seems awfully high.
The comparison that I was hoping to see would be using a device which supports native ASIO vs that same device using ASIO4ALL - then you can actually calculate the difference in latency between the two.
This comparison just shows that the Realtek drivers with ASIO4ALL produces about the same latency as those two other cards.

I'm also wondering about GroovyMAME's resampler.
I've found that with my X-Fi Titanium HD, the smallest buffer size I can use without any errors is 96 samples - which is "1ms" latency at 96kHz according to the driver - or "2ms" at 48kHz.
However I don't have the hardware required to measure what the actual latency is, from input to a sound being produced.
I'm wondering which would actually result in lower latency, or if it wouldn't make a difference.
 
Thanks for the reply.
If I'm reading this correctly, all three options are producing about 80ms latency? That seems awfully high.

Yes, it does. My guess at this point is that the total latency is game dependent. There are two factors in this: the actual PCB latency and the latency added by the emulating driver. Street Fighter III has lower latency than Puzzle Bobble, for instance. What the ASIO stuff currently does is remove the latency added by DirectSound, which can be over 100 ms if you don't tinker with the audio_latency setting.

The comparison that I was hoping to see would be using a device which supports native ASIO vs that same device using ASIO4ALL - then you can actually calculate the difference in latency between the two.

This is the comparison, both the Xonar DGX and kX Live! are using native ASIO drivers.

Edit: I see your point; however, I think it's moot. The same buffer sizes were used for all comparisons (64 samples). But I'll check whether the DGX works with ASIO4ALL (kX does not) and see how that goes.

I'm also wondering about GroovyMAME's resampler.
I've found that with my X-Fi Titanium HD, the smallest buffer size I can use without any errors is 96 samples - which is "1ms" latency at 96kHz according to the driver - or "2ms" at 48kHz.
However I don't have the hardware required to measure what the actual latency is, from input to a sound being produced.
I'm wondering which would actually result in lower latency, or if it wouldn't make a difference.

Theoretically the latency should be lower when using 96 kHz, however, it's probably difficult to notice any difference between the two.
 
So, I redid the tests with the DGX+A4A compared to simply DGX.

Unfortunately, it's not a fair comparison. ASIO4ALL is evidently not able to access the hardware buffer of the card.

This is possible with Realtek, and is what the 'Pull mode' option does.

From the manual:

Note: For WaveRT drivers (Vista), this box is labeled "Allow Pull Mode (WaveRT)" instead!
Enables the hardware buffer for the highlighted device. This only works for so-called "WavePCI" miniports, as other types of WDM drivers do not usually allow direct access to the hardware buffer.
...
If hardware buffering is not supported by a particular audio device, there will be an additional latency of a couple hundred milliseconds, which is clearly audible.


Which explains the results below (I had to raise the buffer size to 384 for A4A, hence the added latency).

Code:
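% Latencies in milliseconds; the A4A run used the 384-sample buffer mentioned above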
dgx_native = [ 99.5 117.5 108.5 119.5 108.5 110.5 105 101.5 106 112 ];
dgx_a4a = [ 460 464 464 464 476 456 463 456 464 460 ];

My initial comparison was between ASIO4ALL+Realtek compared to two sound cards with native ASIO support. For all intents and purposes, the results were identical.
 
Thanks for taking the time to do that test anyway.

Yeah, it was worth the time spent since I now know what the pull mode option actually does. ;)

ASIO4ALL is a tricky beast. The only cards I've gotten it to work with consistently are integrated ones. And with those, the general audio/driver stability is superior to any native ASIO driver I have tested.
 