cageymaru

Wytse Gerichhausen from White Sea Studios has conducted a scientific comparison of the original master file for Hibshi - Missing U (feat. Rochelle), one of the tracks that he personally mixed and mastered, against the same song on Apple Music, Spotify, Tidal, and YouTube. He then recorded the song from each streaming service as a WAV file and normalized the levels because music services lower the volume of the music. Next he used a combination of A/B testing and phase swapping to compare the differences in the WAV files.

The phase swapping allows the viewer to hear the "missing" information from the track that was compressed away or discarded by the algorithms the music streaming services use. In the engineer's opinion, Tidal won by retaining 99.9% of the quality. Spotify High and Low came next. Apple Music and YouTube came in dead last, as both significantly altered the music. He then questioned why he can stream 4K video, but a simple audio file has to be compressed to the point where the quality is completely lost.
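
For anyone wondering what "phase swapping" amounts to in practice, it's a null test: polarity-invert one track and sum it with the other, so everything the two files share cancels and only the differences remain. Here's a minimal sketch of the idea, assuming two sample-aligned WAVs at the same sample rate; the filenames and the numpy/soundfile tooling are illustrative choices, not what Gerichhausen actually used:

```python
# Null-test sketch: subtracting one time-aligned track from the other
# cancels everything they share; what's left is the "missing" content.
# Assumes both files are sample-aligned and share a sample rate.
import numpy as np
import soundfile as sf

master, sr = sf.read("master.wav")        # placeholder filenames
stream, sr2 = sf.read("stream_rip.wav")
assert sr == sr2, "sample rates must match"

n = min(len(master), len(stream))         # trim to the common length
master, stream = master[:n], stream[:n]

master /= np.max(np.abs(master))          # peak-normalize both so level
stream /= np.max(np.abs(stream))          # differences don't dominate

residual = master - stream                # the polarity-inverted sum
sf.write("difference.wav", residual, sr)
print(f"residual peak: {np.max(np.abs(residual)):.4f}")
```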

Isn't this a little over-exaggerated? Isn't this a bit too much, like, too deep into the details? Yes and no. First of all, if you compare them with each other, the differences are pretty small. But our ears have a special feature built into them: they can make up for mistakes in audio. Pretty easy, pretty simple. The thing is, if your ears are correcting a lot for the audio they are hearing, they get tired sooner. And this is bad news for listeners, but also for artists, because we don't want people to get tired of our music. Or do we? That's what's happening with streaming services.
 
No Google Music tested?

I would have liked to see Google Play Music thrown into the mix, as well...curious to see how their AAC sound quality is when set to "high" compared to the other music streaming services.
 
Too bad he had to turn it into a video.

Would have been much better as a written article!
Yes, it would have been great if it was a written article with a video demonstrating the differences. :)

I'm happy to see that the guys are interested in it. As soon as I hear Spotify, I get sad because I'm used to Tidal. When I listen to YouTube, I just remember that it is free so beggars can't be choosers. Also a lot of concert footage ends up there from artists that have passed on, so again I have to just deal with the terrible sound quality.

I agree with others that Tidal, Spotify, and others need to increase the amount of music found on their services. Or someone needs to make a repository of lossless music for future generations. :)
 
This is so flawed:

1: His original looks like the usual crap master, with the dynamic range removed and clipping pushed to make it sound loud... so right there he is hardly qualified for this.
2: Then his evidence lacks any kind of understanding of how music data compression works. He is trying to show you what is missing, but that's not the important part. The important part is how it sounds compared to the original, and THAT IS NOT THE SAME.
3: A total lack of simple but objective and scientifically correct ABX testing.
4: He spoke about vinyl.... get out....


I probably missed a lot because I could not watch this to the end.
His test methodology is just too flawed to be interesting.
 
I would have liked to see Google Play Music thrown into the mix, as well...curious to see how their AAC sound quality is when set to "high" compared to the other music streaming services.

That, and I would like to see him test the option Google Music has of uploading your own music for streaming, to see how much compression goes on there as well.
 
Yes, it would have been great if it was a written article with a video demonstrating the differences. :)

I'm happy to see that the guys are interested in it. As soon as I hear Spotify, I get sad because I'm used to Tidal. When I listen to YouTube, I just remember that it is free so beggars can't be choosers. Also a lot of concert footage ends up there from artists that have passed on, so again I have to just deal with the terrible sound quality.

I agree with others that Tidal, Spotify, and others need to increase the amount of music found on their services. Or someone needs to make a repository of lossless music for future generations. :)

I find Spotify on the extreme quality setting (320kbit Ogg Vorbis, which is VBR by its nature) to be completely indistinguishable from an uncompressed source.

I considered Tidal for a bit, but the catalogue is small and the price is high compared to Spotify, so I decided against it after some SQ listening comparisons between the extreme setting and my CDs.
 
I can't view the video right now. Did he say which quality setting he used for Spotify?
 
What an amateur.. I'm only an indie musician and yet I heard all the differences he could not. He pronounced .wav as "wauv". Are you sure THIS is an audio "engineer"?
 
I can't view the video right now. Did he say which quality setting he used for Spotify?
High and Low. He said that both were fatiguing to his ears. Here is the controversial part: he said that the difference between High and Low was insignificant to him. He immediately appreciated the Tidal stream and had no complaints about it.

When you get home the video is really cool because he A/B swaps between the original master and the music service being tested in real time.

Then he phase swaps the tracks. This means that you hear the sounds that are different between the master and the music service stream. So if the music service cuts the highs out, then you hear that. If it cuts the mids then you hear that. If it cuts the vocals then you hear that. Etc.

Apple Music and YouTube sound atrocious. I don't want to say more because you and others can't watch it because of work. You're going to be surprised how much of the music your ears have to correct for to make those services listenable.

:)
 
This is so flawed:

1: His original looks like the usual crap master, with the dynamic range removed and clipping pushed to make it sound loud... so right there he is hardly qualified for this.
2: Then his evidence lacks any kind of understanding of how music data compression works. He is trying to show you what is missing, but that's not the important part. The important part is how it sounds compared to the original, and THAT IS NOT THE SAME.
3: A total lack of simple but objective and scientifically correct ABX testing.
4: He spoke about vinyl.... get out....


I probably missed a lot because I could not watch this to the end.
His test methodology is just too flawed to be interesting.

Came here to say this... can't take anyone who masters something that clips seriously :(
 
Apple Music and YouTube sound atrocious. I don't want to say more because you and others can't watch it because of work. You're going to be surprised how much of the music your ears have to correct for to make those services listenable.

I still listen to FM radio... listenable is relative :D
 
Came here to say this... can't take anyone who masters something that clips seriously :(

Sadly, this is a common technique.

People have come to associate clipping, or the "blown speaker effect" with loudness, and people subconsciously always think louder sounds better, so this is taken advantage of in mixes all over the place.

It's similar to the poor practice of overly compressing dynamic range used in the "loudness war". It's a real shame.
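
As an aside, this is easy to put a number on: the crest factor (peak-to-RMS ratio) of a track drops as limiting and clipping pile up. A rough sketch, where the 10 dB cutoff is a rule-of-thumb assumption rather than any standard, and the filename is a placeholder:

```python
# Crest factor (peak-to-RMS) check: brickwalled, heavily limited masters
# tend to measure low. The 10 dB cutoff below is only a rule of thumb.
import numpy as np
import soundfile as sf

audio, sr = sf.read("track.wav")          # placeholder filename
if audio.ndim > 1:
    audio = audio.mean(axis=1)            # fold to mono for one figure

peak = np.max(np.abs(audio))
rms = np.sqrt(np.mean(audio ** 2))
crest_db = 20 * np.log10(peak / rms)

print(f"crest factor: {crest_db:.1f} dB")
if crest_db < 10:
    print("low dynamics -- consistent with heavy limiting/clipping")
```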
 
He then recorded the song from each streaming service as a WAV file and normalized the levels because music services lower the volume of the music. Next he used a combination of A/B testing and face swapping to compare the differences in the WAV files.


"face swapping" - really?
 
Streaming 4K quality isn't like the quality of four 1080p streams put together. It's definitely more lossy as a whole, because our eyes are more tolerant of noise when the pixels are smaller. The same could be said for music, and for most of us past our 30s, we can't hear the upper octave easily, as our sensitivity has already started to roll off.

But I do agree with his sentiment. Audio is such low bandwidth that it should be one of the last things we trim as the streaming rate adjusts.
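
The arithmetic bears that out: even fully lossless CD-quality PCM is only about 1.4 Mbps, a sliver of what a 4K video stream uses. A back-of-the-envelope check, where the 16 Mbps video figure is an illustrative assumption:

```python
# Back-of-the-envelope bandwidth comparison (illustrative numbers).
sample_rate = 44_100                       # Hz, CD quality
bit_depth = 16                             # bits per sample
channels = 2                               # stereo

pcm_bps = sample_rate * bit_depth * channels
print(f"lossless CD-quality PCM: {pcm_bps / 1e6:.2f} Mbps")   # ~1.41

video_4k_bps = 16e6                        # assumed typical 4K bitrate
print(f"audio share of a 4K stream: {pcm_bps / video_4k_bps:.1%}")
```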
 
He then recorded the song from each streaming service as a WAV file and normalized the levels because music services lower the volume of the music. Next he used a combination of A/B testing and face swapping to compare the differences in the WAV files.


"face swapping" - really?
He called it face swapping. I'm not an engineer so I went with what he said. :) If I misheard the term that he used then I apologize.
 
What an amateur.. I'm only an indie musician and yet I heard all the differences he could not. He pronounced .wav as "wauv". Are you sure THIS is an audio "engineer"?
I am pretty sure English is not his 1st language.
 
Should I change the front page to say "phase swapping"? Even the CC says "face swapping," but if you think it is "phase swapping" I will edit it!
 
Neither is mine... hearing and learning are not a language barrier.
But some people get stuck with their native accent when speaking English, so words don't always come out right.
 
Whatever you guys think about his methodology, is anyone really surprised at the outcome? The services rank about where you'd expect based on their compression methods (or, in Tidal's case, the lack of them).

Tidal is the only service I've paid for; I've been a subscriber for 4 years now, and I'm happy with it. As another poster mentioned, its downfalls are the lack of content and the crappy UI (both desktop and mobile).
 
Since a large number of folks use Bluetooth headsets/buds and Bluetooth is crap for audio... what's the difference? MP3 over BT = garbage.
 
Technically it is your brain that fills in the missing information in the sine waves, not your ears. But the point stands.

This is so flawed:

1: His original looks like the usual crap master, with the dynamic range removed and clipping pushed to make it sound loud... so right there he is hardly qualified for this.

He said it was normalized to 0 dB. That means that the loudest part in the entire track is at the peak level the encoding can handle without clipping. If it was clipping, there would be flat-topped "square" waves all over, and it would sound really bad. I did notice the red indicator in his software on the original master... (there's a quick way to check a file for that; see the sketch after this post).

2: Then his evidence lacks any kind of understanding of how music data compression works. He is trying to show you what is missing, but that's not the important part. The important part is how it sounds compared to the original, and THAT IS NOT THE SAME.

What is missing is the best audible example of the differences..

3: A total lack of simple but objective and scientifically correct ABX testing.
4: He spoke about vinyl.... get out....

Waah?

I think it would have been better with longer stretches of each version; he flipped back and forth kind of fast. But since it is a YouTube video, we were not hearing what he was hearing in any event.

A bit more information would have been helpful, such as the sample rate of the master: 96 kHz? 48 kHz? CD (44.1 kHz)?
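
On the clipping question raised above, here's a quick sketch that flags the flat-topped "square" shapes hard clipping leaves behind by counting runs of consecutive samples pinned at full scale. The 0.999 threshold and the run length of 4 are arbitrary assumptions, and the filename is a placeholder:

```python
# Hard-clipping check: runs of consecutive samples pinned at/near full
# scale show up as the flat-topped "square" waves mentioned above.
import numpy as np
import soundfile as sf

audio, sr = sf.read("master.wav")          # placeholder filename
if audio.ndim > 1:
    audio = np.abs(audio).max(axis=1)      # per-sample peak across channels
else:
    audio = np.abs(audio)

pinned = audio >= 0.999                    # samples at/near 0 dBFS
run, clipped_runs = 0, 0
for p in pinned:
    run = run + 1 if p else 0
    if run == 4:                           # a 4-sample flat top
        clipped_runs += 1

print(f"suspected clipped runs: {clipped_runs}")
```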
 
Brickwalled modern recordings with zero dynamics will sound pretty much the same.

The thing is, 90% of folks "producing" music on their MacBooks have zero knowledge of dynamics or of how digital recording actually works.
 
I think the differences would be more noticeable on a more complex track, like something from Nine Inch Nails. The track used has limited complexity, not only in the instruments used but in the stereo/spatial separation too.

While his method is something I can get behind, it would have been a better demonstration with a less simple song than the one he used. No knock on him, but that song has almost no complexity to it.
 
What is missing is the best audible example of the differences..



Waah?

I think it would have been better with longer stretches of each version; he flipped back and forth kind of fast. But since it is a YouTube video, we were not hearing what he was hearing in any event.

A bit more information would have been helpful, such as the sample rate of the master: 96 kHz? 48 kHz? CD (44.1 kHz)?


The difference compared to the original is not as important as how close they sound to each other.
Those two things are different if you know how psychoacoustic encoders work.

I.e., a part might be heard very clearly when isolated, making you believe you are missing a lot, when in fact, due to the masking effect, it was barely audible in the original.
These masking effects are what determine what gets "removed" from the song. You don't take that into account when doing a comparison this stupidly.


Also, do you not know what an ABX test is? (Going by the "Waah?" comment.) It's really a basic tool for establishing objectively whether you can hear a difference, without placebo affecting you.
His method is full of placebo.
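
For anyone who hasn't run one: in an ABX test, X is randomly A or B on each trial, and the listener has to say which; scoring well above chance over many trials is evidence of a real audible difference. A bare-bones harness sketch, where playback is stubbed out and the 16-trial count is arbitrary:

```python
# Bare-bones ABX harness: randomize X each trial, collect blind guesses,
# and compare the score to chance. Playback is a stub; a real test must
# not reveal which clip X actually is.
import random

def play(name: str) -> None:
    # Stub: replace with real playback of clip A, clip B, or hidden X.
    input(f"(playing {name} -- press Enter when done) ")

def abx_test(trials: int = 16) -> None:
    correct = 0
    for t in range(1, trials + 1):
        x = random.choice("AB")            # hidden identity of X
        for name in ("A", "B", "X"):
            play(name)
        guess = ""
        while guess not in ("A", "B"):
            guess = input(f"Trial {t}: does X match A or B? ").strip().upper()
        correct += guess == x
    print(f"{correct}/{trials} correct (guessing averages {trials // 2})")

abx_test()
```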
 
I think the differences would be more noticeable on a more complex track, like something from Nine Inch Nails. The track used has limited complexity, not only in the instruments used but in the stereo/spatial separation too.

While his method is something I can get behind, it would have been a better demonstration with a less simple song than the one he used. No knock on him, but that song has almost no complexity to it.
He used that song because he was the one who mixed and mastered it, so he had the uncompressed master.
 
Well then, count me as a person who couldn't tell a damn bit of difference. Then again, I'm listening with a Realtek ALC1220 and a pair of wired Sony MDR-XB600 headphones, so who knows... it could be because I'm a peasant listening with Realtek audio.

I listened to the song again... damn, I can't help but hear that damn autotuner. I hate that damn thing. It's gotten to the point where I wonder why we even have "singers" anymore; just have the computer sing the song, since it's basically the computer that sang it after it's been completely fucked with by the autotuner.
 
Well then, count me as a person who couldn't tell a damn bit of difference. Then again, I'm listening with a Realtek ALC1220 and a pair of wired Sony MDR-XB600 headphones, so who knows... it could be because I'm a peasant listening with Realtek audio.

I listened to the song again... damn, I can't help but hear that damn autotuner. I hate that damn thing.

I have a set of Alesis monitors and a Sound Blaster ZxR, and I could tell a difference with the Apple Music and YouTube versions he was comparing; the others sounded pretty much the same to me.

 
I think the differences would be more noticeable on a more complex track, like something from Nine Inch Nails. The track used has limited complexity, not only in the instruments used but in the stereo/spatial separation too.

While his method is something I can get behind, it would have been a better demonstration with a less simple song than the one he used. No knock on him, but that song has almost no complexity to it.

Yeah. I noticed that heavier/more complex music ends up with a higher .flac bitrate and (subjectively) doesn't compress nearly as well. (A tiny sketch after this post illustrates the effect.)

I would have liked to see Google Play Music thrown into the mix, as well...curious to see how their AAC sound quality is when set to "high" compared to the other music streaming services.

Oh, it's AAC? I thought it was still MP3. TBH, I'd love a music service that encodes music in Opus, which would stream better than Tidal on the road....

Anyway, Google Play's max quality is all over the map for me. Mostly it's fine, but I've run into more obscure songs that clearly sound like they're 128kbps, even on the max quality setting. Then if I download those few songs, the newest files that pop up on my Android phone's SD card seem suspiciously small.
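
Regarding the compression point above: FLAC's linear predictors model smooth, periodic signals well and random, dense ones poorly, so busier material genuinely encodes bigger. A tiny illustration comparing one second of a sine tone to white noise; numpy/soundfile are my tooling choices here, and standard libsndfile builds include FLAC support:

```python
# Signal complexity drives FLAC size: a predictable sine compresses far
# better than white noise, which is essentially incompressible.
import os
import numpy as np
import soundfile as sf

sr = 44_100
t = np.arange(sr) / sr
sine = 0.5 * np.sin(2 * np.pi * 440 * t)       # highly predictable
noise = 0.5 * np.random.uniform(-1, 1, sr)     # maximally "complex"

sf.write("sine.flac", sine, sr)
sf.write("noise.flac", noise, sr)

for name in ("sine.flac", "noise.flac"):
    print(f"{name}: {os.path.getsize(name) / 1024:.0f} KiB")
# Expect the noise file to be several times larger than the sine file.
```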
 
What about Napster?

Or LimeWire, where a simple song can take over your computer, steal your identity, shoot your dog, and burn your house down.
 