Can USB Really Deliver High Audio Fidelity?

DWD1961
Currently I've just been using my internal adapter's Bluetooth with aptX and, when possible, WiFi. I know WiFi can deliver high fidelity with an internal WiFi card, but what about USB connections, using either a WiFi dongle to the sink or a Bluetooth aptX USB transmitter to the sink?

Can a USB connection really deliver high fidelity, or do you need an internal card?

(My current system is an ITX build, so I have no internal PCI slots available. ITX only comes with one PCIe slot, and that's where the video card goes, unless you are using an SoC, which I am not.)
 
Some USB connections run at 10 gigabits per second; that's roughly 85 times the bandwidth of the combined audio and 4K video stream off a 100 GB Blu-ray. It's hard to imagine USB 2.0 (480 Mbps) or higher having issues with any imaginable audio data.
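To put rough numbers on that, here's a quick Python sanity check (nominal link rates only; real-world USB payload throughput is lower, and the PCM figures are plain arithmetic):

[CODE=python]
# Back-of-the-envelope check: uncompressed PCM audio bitrates vs. USB link speeds.
# These are nominal signaling rates, not real-world payload throughput.

def pcm_kbps(sample_rate_hz, bit_depth, channels):
    """Raw (uncompressed) PCM bitrate in kbit/s."""
    return sample_rate_hz * bit_depth * channels / 1000

cd_audio = pcm_kbps(44_100, 16, 2)    # Red Book CD: ~1,411 kbit/s
hi_res   = pcm_kbps(192_000, 24, 2)   # 24-bit/192 kHz stereo: ~9,216 kbit/s

usb2 = 480_000      # USB 2.0 High Speed: 480 Mbit/s
usb3 = 10_000_000   # USB 3.x Gen 2: 10 Gbit/s

print(f"CD audio:      {cd_audio:>9,.0f} kbit/s")
print(f"24/192 stereo: {hi_res:>9,.0f} kbit/s")
print(f"USB 2.0 headroom over CD audio: ~{usb2 / cd_audio:,.0f}x")
print(f"USB 3.x headroom over 24/192:   ~{usb3 / hi_res:,.0f}x")
[/CODE]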
 
The only issues with USB are latency and data integrity. Latency is generally acceptable, and data integrity is generally only a concern with shitty/mangled cables.
 
One of the issues I've run into with USB audio adapters is noise in the output -- USB is a pretty noisy bus, and can carry up to 500 mV of high-frequency noise depending on the devices you have plugged in. I've found the best option is an optical DAC, if you have an optical audio output available on your motherboard.
 
In that situation, the limiting factor for audio quality is going to be, by faaarrr, the Wi-Fi or Bluetooth part of the signal's journey, not the USB part. USB is used as a connection for all sorts of pro audio equipment (i.e. gear used to make content). Bluetooth, on the other hand, is garbo for audio, so any audio quality loss is going to happen there.

edit to add: the USB latency is a valid concern, but yeah, I don't see it as being the weak link in a system that involves beaming real-time audio over BT/WiFi.
 
I think USB is basically fine, but I still prefer optical because it's easier to deal with - I have all my sources going to an optical 4x2 matrix switch so any source can go to either the speakers or the headphones (or both).

Plus electrical isolation...
 
One of the issues I've run into with USB audio adapters is noise in the output -- USB is a pretty noisy bus, and can carry up to 500 mV of high-frequency noise depending on the devices you have plugged in. I've found the best option is an optical DAC, if you have an optical audio output available on your motherboard.
It has an internal SPDIF riser, but it says it's for expansion cards. So, no, it doesn't have an optical rear port. But I'll remember that the next time I buy. This board is set up to use an SoC, with all of the video etc. ports on the back to support an SoC, which I have disabled in the BIOS.
 
It has an internal SPDIF riser, but it says it's for expansion cards. So, no, it doesn't have an optical rear port. But I'll remember that the next time I buy. This board is set up to use an SoC, with all of the video etc. ports on the back to support an SoC, which I have disabled in the BIOS.
What board is it? It doesn't have mini-toslink in the line-out port?
 
My Strix z590i does not have an optical out so I bought a USB to optical device. Works great.
 
In that situation, the limiting factor for audio quality is going to be, by faaarrr, the Wi-Fi or Bluetooth part of the signal's journey, not the USB part. USB is used as a connection for all sorts of pro audio equipment (i.e. gear used to make content). Bluetooth, on the other hand, is garbo for audio, so any audio quality loss is going to happen there.

edit to add: the USB latency is a valid concern, but yeah, I don't see it as being the weak link in a system that involves beaming real-time audio over BT/WiFi.
Yeah, I wish more manufacturers would produce WiFi amps and/or speakers instead of aptX and BT. They are really hard to find. WiFi is 100% lossless and passes any codec through un-recompressed, unlike any BT codec, which always re-compresses, even aptX HD. Man, who is going to listen to all of their music in FLAC all of the time? It's ridiculous, really.

My preference is for a WiFi-enabled amp, through the use of a dongle, and then passive speakers on the amp itself, where the amp uses its own high-quality DAC. Finding that is pretty much "go fuck yourself" territory.
 
Yeah, I wish more manufacturers would produce WiFi amps and/or speakers instead of aptX and BT. They are really hard to find. WiFi is 100% lossless and passes any codec through un-recompressed, unlike any BT codec, which always re-compresses, even aptX HD. Man, who is going to listen to all of their music in FLAC all of the time? It's ridiculous, really.

My preference is for a WiFi-enabled amp, through the use of a dongle, and then passive speakers on the amp itself, where the amp uses its own high-quality DAC. Finding that is pretty much "go fuck yourself" territory.
Didn't know that about Wi-Fi! Makes sense, it has the bandwidth for lossless audio. BT is convenient and standard but yeah... garbo quality.
 
Didn't know that about Wi-Fi! Makes sense, it has the bandwidth for lossless audio. BT is convenient and standard but yeah... garbo quality.
WiFi has no compression at all, no codecs; it's just a 1-to-1 transfer of bits. The reason BT won over WiFi is that WiFi has to have a middleman to connect, which is our routers. That means there isn't really any way to link a WiFi device directly to another device without a proprietary dongle - such as a WiFi mouse's dongle. Notice that Bluetooth never really did jack until mobile became a "thing", simply because of, as you put it, "garbo" quality. The great thing about WiFi is that you can send any codec to the sink device - mp3, AAC, OGG, FLAC - and it just passes it all along without any further compression. Bluetooth codecs, regardless of the encoding used, ALWAYS re-compress to some degree. It's supposed to do as little as possible, but it still does it.

I probably need to go back and do more research, but currently my understanding of aptX is that if you don't start with an uncompressed or lossless format, such as FLAC, you probably won't hear any difference between aptX HD and SBC at the sink device. It's true that regular old aptX can deliver CD-quality music, but it has to start with a lossless format, such as FLAC. (Incidentally, SBC can also do CD quality, which is 44-48 kHz at 16 bit.) That runs completely counter to why people would want aptX, or any aptX variant, on their phones, because they aren't going to be carrying around uncompressed files. (There is also the streaming equation, and streaming is never uncompressed.) Most mobile-centric users don't even know what a codec is, much less the FLAC codec. They stream music at 128 kbps, or at most currently 320 kbps in mp3, and from my research, their expensive aptX HD headphones don't sound one bit better than the SBC codec (which, according to my research, really is pretty decent and gets a bad rap). For instance, if you feed the SBC codec FLAC, it can spit out 350+ kbps, and that's pretty much indistinguishable to the human ear from a non-compressed source. SBC is really quite good: "SBC supports mono and stereo streams, certain sampling frequencies up to 48 kHz. Maximum bitrate required to be supported by decoders is 320 kbit/s for mono and 512 kbit/s for stereo streams." --Wikipedia

and

"SBC is a simple and computationally fast codec with a primitive psychoacoustic model (with simple auditory masking) using adaptive pulse code modulation (APCM).
A2DP specification recommends using two profiles: Middle Quality and High Quality."
[attached image: table of the A2DP Middle Quality and High Quality SBC profile settings]

"The codec has many settings that allow you to control the algorithmic delay, number of samples in the block and bit allocation algorithm, but almost always the parameters used in the specification are used everywhere: Joint Stereo, 8 frequency bands, 16 blocks in the audio frame, Loudness bit allocation method."
Source: https://habr.com/en/post/456182/

So, SBC really isn't that bad.
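To make the "SBC really isn't that bad" point concrete, here's a quick sketch comparing the bitrates quoted above against raw CD-quality PCM (the SBC figures come from the Wikipedia and habr quotes; the MP3 line is just for comparison):

[CODE=python]
# Compression ratios vs. uncompressed CD-quality PCM, using the bitrates
# quoted above (kbit/s). Nothing here is measured; it's plain arithmetic.

cd_pcm = 44_100 * 16 * 2 / 1000   # ~1,411 kbit/s, uncompressed CD stereo

rates = [
    ("SBC High Quality", 328),    # recommended A2DP profile (habr article)
    ("SBC max stereo",   512),    # max decoders must support (Wikipedia)
    ("MP3 @ 320",        320),    # high-bitrate LAME MP3, for comparison
]

for name, kbps in rates:
    print(f"{name:17s} {kbps:4d} kbit/s  ->  about {cd_pcm / kbps:.1f}:1 vs CD PCM")
[/CODE]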

One thing BT developers could do is require other codecs besides SBC, such as mp3, and then the mp3 file would NOT be re-compressed. Most BT devices support AAC, so why not mp3? In that instance, you could use a high-quality mp3 encoder like LAME, use a bitrate of 320 kbps, and be very happy with BT audio. MP3 is still the most used compressor, so there must be some reason mp3 was not adopted for BT and SBC was.

The only caveat to using aptX or aptX HD is that aptX supposedly re-compresses files 'better' than SBC, so you can 'hear' a difference. That would be the ONLY reason to use the aptX codec, really.
 
WiFi has no compression at all, no codecs; it's just a 1-to-1 transfer of bits. … So, SBC really isn't that bad. …
Fwiw, FLAC is compressed, but it's lossless, so it takes a bit more space than lossy compressed formats. WAV is uncompressed, iirc, and would be huge compared to most FLAC files with the same content.
 
edit to add: the USB latency is a valid concern, but yeah, I don't see it as being the weak link in a system that involves beaming real-time audio over BT/WiFi.
Would it be an issue for something where the signal is fully continuous and known in advance, like a stream of pre-recorded music?
 
WiFi has no compression at all, no codecs; it's just a 1-to-1 transfer of bits. The reason BT won over WiFi is that WiFi has to have a middleman to connect, which is our routers.
That is strange language to me. WiFi is a wireless network protocol and tells us nothing about compression, and it would be extremely strange to stream audio uncompressed (when there are nice lossless compression algorithms out there for it, like FLAC and others).
but currently my understanding of aptX is that if you don't start with an uncompressed or lossless format, such as FLAC, you probably won't hear any difference between aptX HD and SBC at the sink device. It's true that regular old aptX can deliver CD-quality music, but it has to start with a lossless format, such as FLAC.
Almost no human is able to hear a difference between FLAC and a good MP3. Those who train themselves to be able to can, but it is not necessarily that one is better than the other; they can just catch the tiny differences they trained themselves to detect, and they will correctly identify which is which maybe 85-90% of the time.

Mobile users have full access to lossless streamed music on their iPhone now, I think (which sounds like a pure marketing gimmick to me).
 
WiFi has no compression at all, no codecs; it's just a 1-to-1 transfer of bits. … So, SBC really isn't that bad. …
Thanks for taking the time to share all that! My brain is awash with audio knowledge and I love it. Learning things, woot.
 
Would it be an issue for something where the signal is fully continuous and known in advance, like a stream of pre-recorded music?
Potentially, but in a roundabout way? A device chain that's sending and receiving a real-time audio stream (i.e. the buffer is a small fraction of a second) neither knows nor cares whether the audio source is pre-recorded or live; it's just sending and receiving data.*** The issue is when the sender skips a beat (pun not intended) and the receiver (pun possibly intended) runs out its tiny buffer and starts missing samples, resulting in audio dropouts and/or distortion. Buuuut that's really more of a Windows issue IME. USB is perfectly capable of sending uninterrupted data streams as far as I can tell (certainly more so than packet-based wireless interfaces); rather, it is Windows' frequent inability to create and maintain that uninterrupted audio stream that is an utter dumpster fire. But that's another topic. An optical interface using an ASIO driver would be the least problematic, but serious music/audio producers use USB audio interfaces all day every day, so there's nothing inherently unsuitable about USB itself for audio, stability-wise.

***Windows 10 often being catastrophically bad at real-time audio processing means that adding live audio input gives things more opportunity to f*ck up, but that's only tangentially related to the physical interface being used to send the audio to the listening device.
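To put numbers on the "tiny buffer" point: a buffer's length in samples divided by the sample rate is exactly how long a hiccup it can absorb before samples go missing. A minimal sketch (the buffer sizes are hypothetical examples, not any specific driver's defaults):

[CODE=python]
# How long a playback stall an audio buffer can absorb before a dropout.
# Buffer sizes below are illustrative examples, not any driver's defaults.

sample_rate = 48_000  # Hz

for frames in (128, 512, 2048):  # buffer size in sample frames
    ms = frames / sample_rate * 1000
    print(f"{frames:5d}-frame buffer holds {ms:5.1f} ms of audio; "
          f"any stall longer than that means an audible dropout")
[/CODE]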
 
Almost no human is able to hear a difference between FLAC and a good MP3.
Not true.
The equipment being used matters a lot; a poor setup will prevent you hearing how good it should sound when not compressed.
MP3 is missing a lot on a decent system.
I have some old music tracks that are MP3s that I occasionally play, but the loss of space and detail is tangible.
 
The quality of the master recording is pretty important: a low-quality master will sound bad regardless of the codec used for compression.
 
The quality of audio over USB is very much dependent on the USB implementation in the DAC and whether the DAC chip and circuits are good enough to make use of the benefits.
For DACs that don't have very good USB noise isolation, you can place a device in the path that does the isolation instead.
Also, the bitrate you need to send matters.

You can perform extremely high-quality upsampling to the max bitrate the DAC will accept with a program like HQPlayer, but not over optical, because it maxes out at 24/192.
Optical is a very good method for transferring audio data because there are no electrical connections, hence no electrical noise. There can still be noise/jitter in the signal, though, and optical is bandwidth-limited to 24-bit/192 kHz stereo.
Other connection methods can be isolated just as well and have higher bandwidth.

My DAC can use USB, BNC, RCA coax, optical (Toslink), AES/EBU and I2S connection methods.
It has proven to give the best sound quality over USB because it has incredibly good built-in isolation and anti-jitter, and even USB 2.0 can cope with extremely high bitrates.
I upsample my music to 24-bit/1.5 MHz (yep, you read that right) with HQPlayer, and the sound quality is sublime.
(my DAC: https://www.kitsunehifi.com/product/holo-audio-may-dac/)

Every DAC has its own best connection method.
This post is to demonstrate just how good USB can be when implemented well.
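For a sense of scale on why optical tops out while USB 2.0 copes, here is the raw PCM arithmetic. I'm reading "1.5 MHz" as 1.536 MHz (32x-upsampled 48 kHz material); that's an assumption for the sketch, not a confirmed detail of the setup:

[CODE=python]
# Raw stereo PCM data rates vs. interface limits, in Mbit/s.
# The 1.536 MHz figure assumes "1.5 MHz" means 32x-upsampled 48 kHz;
# that is a guess for illustration, not a confirmed setting.

def mbit_per_s(rate_hz, bits, channels=2):
    return rate_hz * bits * channels / 1e6

print(f"24/192 (Toslink ceiling):     {mbit_per_s(192_000, 24):6.1f} Mbit/s")
print(f"24-bit / 1.536 MHz upsampled: {mbit_per_s(1_536_000, 24):6.1f} Mbit/s")
print(f"USB 2.0 High Speed link:       480.0 Mbit/s")
[/CODE]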
 
You're still connecting through USB. The idea was to get away from any noise USB has, inherently.
No.

It's a digital-to-digital conversion, and it's electrically decoupled from my Integrated Receiver, which contains the DAC.

Could the USB introduce a few stray errors because it's "noisy"?

Why would this external USB device somehow be more likely to do that than your computer system in general?

Edit: Also, what other options do I have?

1. 3.5mm analog out (from 10 feet away, going past a portable A/C, a small NAS, and the WiFi router/cable modem)

2. ???

This is the device, in case anyone else has a similar situation - it works great, and while it says it can do DSD, I just run basic stereo through it and have had zero issues (LS50s, Parasound Integrated Amp, SVS 12" Sealed Sub).
https://www.amazon.com/gp/product/B08HN3VSF8/
 
Fwiw, FLAC is compressed, but it's lossless, so it takes a bit more space than lossy compressed formats. WAV is uncompressed, iirc, and would be huge compared to most FLAC files with the same content.
It's a good point, and yep, FLAC is a compressed format. It just doesn't lose any of the original data, so you can actually convert it back to an uncompressed format without any loss.
 
That is strange language to me. WiFi is a wireless network protocol and tells us nothing about compression …

Almost no human is able to hear a difference between FLAC and a good MP3 …

Mobile users have full access to lossless streamed music on their iPhone now, I think …
You can stream a WAV file from your computer to your home stereo if you have a WiFi-enabled receiver/amp/DAC. Personally, I wouldn't use WAV, but you can also stream CD music from a CD player across WiFi.

That's a really good point, but most people can't hear any difference anyway, because their hearing has degraded below the 20 kHz level in the first place. No matter how bad the compression is, if they can't physically hear the difference, then they simply cannot hear it. I've not seen any scientific studies that actually show anyone distinguishing between FLAC and a high-quality 320 kbps MP3 in a statistically significant way. I'd love to see that verified. "Better" in the audio world is defined as how it sounded when it was originally recorded. Anything that changes the original recording is "badder." lol

Mobile users do not have any such thing, because the bandwidth for any sort of Bluetooth is below what it takes to stream uncompressed CD-quality music. Theoretically, you can get 1500 kbps out of BT 5, but you'll never see that; the payload is very much lower, practically speaking. The good news is you don't need 1500 kbps files to sound as good as the original files.

I would suggest anyone interested in this subject read this website:
https://habr.com/en/post/456182/

For example:
"To sum up, SBC is a very flexible codec: it can be configured for low latency, gives excellent audio quality at high bitrates (452+ kb/s) and is quite good for most people on standard High Quality (328 kb/s). However, there are a few reasons why the codec is infamous for its low sound quality: A2DP standard does not define fixed profiles (it only gives recommendations), Bluetooth stack developers set artificial limits on Bitpool, the parameters of the transmitted audio are not displayed in the user interface, and headphone manufacturers are free to set their settings and never specify the Bitpool value in technical characteristics of the product.
The bitpool parameter directly affects the bitrate only within one profile. The same bitpool value of 53 can produce both the 328 kbps bitrate with the recommended High Quality profile, and 1212 kbps in Dual Channel mode and 4 frequency bands, which is why the OS authors also set limits on bitrate in addition to bitpool. I assume the situation arose due to the flaw in the A2DP standard: it was necessary to negotiate the bitrate, not bitpool."
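To make the bitpool/bitrate relationship concrete: the A2DP spec defines the frame-length formulas the article's numbers come from, and this little Python sketch reproduces both figures in the quote (328 kbps for the recommended High Quality profile, and 1212 kbps for Dual Channel with 4 bands at the same bitpool of 53):

[CODE=python]
import math

# SBC bitpool -> bitrate, using the A2DP spec's frame-length formulas
# (44.1 kHz throughout, as in the quoted examples).

def sbc_bitrate_kbps(bitpool, freq=44_100, subbands=8, blocks=16,
                     mode="joint_stereo"):
    channels = 1 if mode == "mono" else 2
    header = 4 + (4 * subbands * channels) // 8  # header + scale factors
    if mode in ("mono", "dual_channel"):
        payload = math.ceil(blocks * channels * bitpool / 8)
    elif mode == "stereo":
        payload = math.ceil(blocks * bitpool / 8)
    else:  # joint_stereo
        payload = math.ceil((subbands + blocks * bitpool) / 8)
    frame_bytes = header + payload
    return 8 * frame_bytes * freq / (subbands * blocks) / 1000

# Same bitpool of 53, wildly different bitrates depending on the profile:
print(f"{sbc_bitrate_kbps(53):.0f} kbit/s")                                  # ~328 (High Quality)
print(f"{sbc_bitrate_kbps(53, subbands=4, mode='dual_channel'):.0f} kbit/s") # ~1212 (Dual Channel, 4 bands)
[/CODE]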
 
The quality of audio over USB is very much dependent on the USB implementation in the DAC … This post is to demonstrate just how good USB can be when implemented well. …
Indeed, and even more so how good the BT implementation sounds. Not only the DAC but the BT stack as well. We don't have to deal with any of that using WiFi - well, apart from the dongle and/or the electronics in the transmitter and sink.

Interesting what you said about USB being better than RCA, 3.5mm, or optical. You mean USB from your source to the DAC, but not connected to your computer?
 
No. It's a digital-to-digital conversion, and it's electrically decoupled from my Integrated Receiver, which contains the DAC. … This is the device, in case anyone else has a similar situation: https://www.amazon.com/gp/product/B08HN3VSF8/ …
Really interesting, and I think that was what Nenu was talking about too? In both cases, the USB part is moved out of the PC environment? In other words, instead of using RCA, 3.5mm, or optical, it uses a USB interface? If that is correct, yeah, I get it. It seems like overkill for audio, though. However, USB cables are usually very well shielded against EMI, so yeah. Interesting, thanks.
 
Just wanted to say, the main reason I posted about USB quality was to plug an aptX transmitter into USB for an aptX sink, which then connects to my amp using a 3.5mm jack. I was just wondering about noise from the computer's USB interface.
 
Interesting what you said about USB being better than RCA, 3.5mm, or optical. You mean USB from your source to the DAC, but not connected to your computer?
From PC. My source is my PC.
Before I bought the Holo May KTE, Magna Hifi confirmed USB was better than even I2S with the Holo May, using a custom USB-to-I2S converter, and a review I read verified the USB quality as well.
USB input is PC-only on my DAC; it will not accept USB pens etc.
3.5mm doesn't figure in this, and RCA is the wired version of optical. I am only discussing inputs to the DAC, not outputs.

It's not that USB is better for noise prevention; it's what you can (or are prepared to) do to keep ground/power noise at bay, making it as good as optical, and to remove jitter (which can also be implemented on optical).
USB has way higher data rates, which is why a lot more effort is put into it by manufacturers.

Before I discovered HQPlayer (for high-quality upsampling), I had an Oppo 105 and 205 and tested their USB vs optical vs HDMI inputs for how good they sound (i.e. detail/separation/soundstage and harshness/mellowness ...).
Optical won by a whisker up to 24/96 by being slightly more laid back/mellower (my cables wouldn't do 192 kHz without issue); USB was better at higher data rates up to 24/192 and with DSD, due to higher detail and the benefits that brings.
HDMI (from PC) sounded pretty rough in comparison; it's not a good solution for music audio, even though you might want to use it for multichannel. Too much noise/jitter and not much to prevent it.
But because the Oppo players can directly play music/video from disc, a USB pen/drive, or over a wired network, I was able to compare each input to the very best the Oppo can do.
Optical and USB input were very close to the best, such that I didn't care about the difference; they were low-significance niggles tbf.
I always played multichannel music/video from a disc or USB pen plugged directly into the Oppo. No cables or HDMI needed.

Sad thing: Oppo no longer make these high-end DAC/players, and newer DACs that sound simply incredible have been released since, but they are stereo only.
I haven't come across any multichannel DACs that can beat the Oppo 205's sound quality yet. But for stereo, my Holo May KTE is much preferred, and with high-quality upsampling it's another world.
So I don't use an Oppo any more, because most of my collection is stereo.

Apologies for the waffle; there are a few points I couldn't make otherwise.
 
Just wanted to say, the main reason I posted about USB quality was to plug an aptX transmitter into USB for an aptX sink, which then connects to my amp using a 3.5mm jack. I was just wondering about noise from the computer's USB interface.
This is the one I use, but I use Bluetooth to optical, and the optical goes into my matrix switch.

This also has 3.5mm out, and there is a version with a higher-quality 3.5mm DAC output.

https://www.amazon.com/gp/product/B07D1JJBJR/

 
Not true.
The equipment being used matters a lot; a poor setup will prevent you hearing how good it should sound when not compressed.
MP3 is missing a lot on a decent system.
I have some old music tracks that are MP3s that I occasionally play, but the loss of space and detail is tangible.
The question is whether humans can tell the difference between a well-compressed 320 kbps MP3 and an uncompressed audio file. My assumption was that people would be listening to those files using the best lab equipment available, without any outside noise interfering with the test, and all other things equal, of course. What I have read so far says, nope. If you are in a room that has noise and you are using non-scientific equipment, you probably will not notice any difference at all, but that's beside your point, since you state that MP3 is "missing a lot" to a point that is noticeable to listeners on good equipment.

According to this recent lab audio test, not true. What is interesting in this test is that they used MP3 at 192 kbps - not 320! Still, no statistically significant audible difference:

==============

Research Article | Open Access | Volume 2019 | Article ID 8265301
Stuart Cunningham and Iain McGregor, "Subjective Evaluation of Music Compressed with the ACER Codec Compared to AAC, MP3, and Uncompressed PCM", International Journal of Digital Multimedia Broadcasting, vol. 2019, Article ID 8265301, 16 pages, 2019. https://doi.org/10.1155/2019/8265301
Centre for Advanced Computational Science, Manchester Metropolitan University, Manchester M1 5GD, UK; School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK. Received 03 Feb 2019; Revised 30 May 2019; Accepted 17 Jun 2019; Published 11 Jul 2019.

[My excerpt]

Begin Quote--

Use of a listening test methodology such as ITU-R BS-1116 [34] or Multiple Stimulus Hidden Reference and Anchors (MUSHRA) [35] would have been a feasible approach. However, such approaches require study participants to be expert listeners who are proficient at detecting small differences in audio quality. Whilst the use of expert listeners is intended to ensure reliable results, it does not accurately reflect the broader population, which has a much greater level of variation with regard to their perception of audio quality. Based upon this, a custom approach was adopted and it was decided to use untrained listeners in the study.
Participants were provided with the opportunity to hear a short (20 s) sample from the 10 selected songs. Each was played back repeatedly until the participant completed their response or wished to move on. They were able to hear six versions of each song: uncompressed WAV, MP3 192 kbps CBR, AAC 192 kbps CBR, ACER low quality, ACER medium quality, and ACER high quality. Each sample was played back concurrently and fed in random order into a Canford Source Selector HG8/1 hardware switch, allowing participants to freely select which sample stream they were listening to using a simple rotary switch.
Enclosed Beyer Dynamic DT770M 80-ohm headphones were chosen for the study as they have a passive ambient noise reduction of 35 dB, according to the manufacturer's specification. A Rane HC6S headphone amplifier was set so that the RMS level was 82 dBC, broadly in accordance with the reference level recommended by the ITU-R [29, 34], and with a peak of 95 dBC. Music is the most popular media form for headphone use with high levels of adoption and regular use [36, 37]. Headphones are reported as being the second equal most popular method after computer speakers for the consumption of music [38].
The use of headphones also minimised the effect of any room acoustic colouration, which are known to affect listening studies [39]. They also potentially facilitate a greater level of detail due to driver proximity and minimal cross-talk. It is acknowledged that the stereo image experienced when using headphones will differ from that of loudspeakers. Nevertheless, when using headphones, the listener experiences the sound as being perceptually from the exterior world [40]. It has been found that there is little difference between studio loudspeakers and studio quality headphones in audio evaluation situations; both MUSHRA [41] and ITU-R standards for listening tests endorse use of either headphones or loudspeakers [29, 34].
H1: The perceived differences in audio quality, in terms of noise and distortion, between uncompressed WAV, AAC, MP3, and ACER music samples are insignificant.
H2: The perceived differences in audio quality, in terms of audio stereo imaging, between uncompressed WAV, AAC, MP3, and ACER music samples are insignificant.
End Quote--

(They mean statistically insignificant)

Source: https://www.hindawi.com/journals/ijdmb/2019/8265301/
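For anyone wondering what "statistically insignificant" cashes out to: in a blind comparison, listeners have to beat coin-flip guessing by enough that chance becomes an unlikely explanation. A minimal sketch of that check (the 16-trial count is an illustrative example, not the study's actual design):

[CODE=python]
from math import comb

# One-sided binomial test: the chance of scoring k or more correct out of
# n trials by pure 50/50 guessing. A small p suggests the listener is
# actually hearing a difference rather than guessing.

def p_at_least(k, n):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

trials = 16  # illustrative trial count, not the study's design
for correct in (9, 12, 14):
    p = p_at_least(correct, trials)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{correct}/{trials} correct: p = {p:.3f} -> {verdict} at alpha = 0.05")
[/CODE]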
 
The question is whether humans can tell the difference between a well-compressed 320 kbps MP3 and an uncompressed audio file. … Source: https://www.hindawi.com/journals/ijdmb/2019/8265301/ …

It's always amusing reading such reports; they cannot change my experience with my music.
I don't mind or care about the results they found; what I hear is all that matters to me.
If that's not enough for you, I don't mind; you'll make your own mind up about it eventually.

We were able to easily tell the difference before my soundcard days ended (well over 10 years ago). It didn't need high-end kit, though it wasn't rubbish equipment either.
The difference with the high-end kit I've bought/built since then is even more pronounced.

Sadly, HQPlayer won't accept MP3s, so a direct comparison can't be done there without more fiddling about.
But even with Foobar it's easy to notice.
 
Well, yeah, but you're using your own equipment.
Well, yes, for me; that's why I invited the person to try it if they want to see how much of a difference they can hear on their own equipment (if the issue is not that they had badly made, old, or low-bitrate MP3s).
 
It's always amusing reading such reports; they cannot change my experience with my music. … But even with Foobar it's easy to notice. …
It has nothing to do with me personally, or my subjective opinion. The simple explanation for your belief is that you didn't conduct your tests in a double-blind and controlled manner, that is, using the scientific method. Otherwise, testing your subjective experience objectively (scientifically) might prove to you that you actually have not heard the difference.

"Vitalism states that the functions of living things are controlled by a “vital force” and not biophysical means. . . .In 1967, Francis Crick, the co-discoverer of the structure of DNA, stated “And so to those of you who may be vitalists I would make this prophecy: what everyone believed yesterday, and you believe today, only cranks will believe tomorrow.”" https://listverse.com/2009/01/19/10-debunked-scientific-beliefs-of-the-past/
 
It has nothing to do with me personally, or my subjective opinion. The simple explanation for your belief is that you didn't conduct your tests in a double-blind and controlled manner …
I have, and this discussion is closed.
I don't care what "you" think or believe :)
 