24/96 tracks

A DAC oversamples anyway... you don't get the stair-stepped waveform that you see in an audio editor. So feeding the DAC more data (data that was actually captured, not calculated after the fact) would mean the DAC has less to calculate and assume. That's what higher sample rates and bit depths do, but people still bible-grip their 16/44.1 math and try to apply it everywhere else.
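To see that interpolation step concretely, here's a minimal sketch (Python with NumPy/SciPy; the 1 kHz tone and 8x factor are arbitrary choices, not any particular DAC's design) of what an oversampling DAC's digital filter does with the stored samples:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100                              # stored sample rate
f = 1_000                                # 1 kHz test tone
n = np.arange(512)
x = np.sin(2 * np.pi * f * n / fs)       # the stored samples ("stair-steps" in an editor)

x8 = resample_poly(x, up=8, down=1)      # 8x oversampling + interpolation low-pass

# Compare against an ideal sine evaluated on the oversampled grid; away from
# the filter's edge effects the interpolated curve is smooth and nearly exact.
ideal = np.sin(2 * np.pi * f * np.arange(len(x8)) / (8 * fs))
print(f"max mid-section error: {np.max(np.abs((x8 - ideal)[500:-500])):.2e}")
```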
 
In that case I would strongly suggest that you stop listening to digital audio of any kind.
 
So if you have 2-bit audio, you couldn't play a tone loud enough to shatter your eardrum and also have complete silence from that format? Because then you would only have a 12 dB dynamic range? This is what bugs me about everyone's dynamic range propaganda. It's complete BS.
 
So if you have 2-bit audio, you couldn't play a tone loud enough to shatter your eardrum and also have complete silence from that format? Because then you would only have a 12 dB dynamic range? This is what bugs me about everyone's dynamic range propaganda. It's complete BS.

Even with 2-bit tones, a decent DAC can extrapolate a nice-looking analog wave from that :p After that, your speakers will likely smooth out the rest, unless they are massless. And if you did have massless drivers, then line loss from cables, TRS jacks, or even screwed-in connectors would change the audio around; even the air mass between you and the speaker/headset has an effect. If not that, then the people mastering the tracks don't have the equipment to give a "perfect" representation of what an instrument or a voice sounds like. Or they're using a synth board, and it's all moot to begin with.

Either way, I gave up on caring. I got out before the cable-voodoo people (unrelated) really started showing up in numbers. These days I guess I enjoy the content more than the system it plays on (not to say I'll accept a hissy audio source) :p
 
I'm not talking about extrapolating a nice wave, I'm talking about dynamic range. It's about resolution, not so much the difference between silence and deafeningly loud. Until these mathematicians step back and take a common-sense look at things, I have no time of day for them. A 2-bit format could give absolute silence, or absolute loud, but nothing in between (assuming you choose a tone whose frequency lines up with the sample rate). All this Nyquist BS in audio assumes that all sound signals are shaped in a sinusoidal manner.
 
So if you have 2-bit audio, you couldn't play a tone loud enough to shatter your eardrum and also have complete silence from that format?
You could, sure.

All this Nyquist BS in audio assumes that all sound signals are shaped in a sinusoidal manner.
It only assumes that the data is represented in a particular manner. The theorem applies to the digital representation of any band-limited signal: as long as the signal contains no frequencies at or above half the sample rate, the samples capture it exactly. The shape of the waveform is entirely irrelevant. It could be a sine wave, a sawtooth wave, a square wave or a complex wave: it makes no difference to the theorem.
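To make that concrete, here's a toy check (Python/NumPy; the sample rate, fundamental, and window length are arbitrary): a band-limited sawtooth - nothing sinusoidal about its shape - is recovered from its samples by plain sinc interpolation, exactly as the theorem says:

```python
import numpy as np

fs = 8_000                               # sample rate
f0 = 500                                 # sawtooth fundamental
dur = 0.02                               # 20 ms window
t_fine = np.linspace(0, dur, 4000, endpoint=False)   # "continuous" time axis
n = np.arange(int(dur * fs)) / fs                    # the sample instants

def bl_sawtooth(t):
    """Band-limited sawtooth: keep only the harmonics below fs/2."""
    out = np.zeros_like(t)
    k = 1
    while k * f0 < fs / 2:
        out += (-1) ** (k + 1) * np.sin(2 * np.pi * k * f0 * t) / k
        k += 1
    return out

x_samp = bl_sawtooth(n)                  # what the ADC stores: 160 numbers
# Whittaker-Shannon reconstruction: a sum of sincs, one per stored sample.
x_rec = np.array([np.sum(x_samp * np.sinc((t - n) * fs)) for t in t_fine])
err = np.abs(x_rec - bl_sawtooth(t_fine))
print(f"max error mid-window: {err[1000:3000].max():.3e}")
# (Truncating the sinc sum at the window edges is the only approximation;
# the error shrinks as the window grows.)
```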
 
Keep thinking your bullshit 16/44.1 is enough. Stay off my earth with your keeping technology stifled.
 
grantorino.jpg
 
Take a deep breath there, bud. There is some good info in this thread, why not keep it civil?
 
I'm not talking about extrapolating a nice wave, I'm talking about dynamic range. It's about resolution, not so much the difference between silence and deafeningly loud. Until these mathematicians step back and take a common-sense look at things, I have no time of day for them. A 2-bit format could give absolute silence, or absolute loud, but nothing in between (assuming you choose a tone whose frequency lines up with the sample rate). All this Nyquist BS in audio assumes that all sound signals are shaped in a sinusoidal manner.

The thing is, unless you're doing heavy oversampling for pulse-density modulation, like in DSD SACDs, you introduce massive amounts of quantization noise at such a low bit depth.

What happens is that since the samples of the waveform are quantized, once they get low enough they just get rounded to zero. So sounds below a certain level simply get cut off. They're not really zero, but your ADC or software resampling treats them as such - you've reached the point at which you can no longer meaningfully measure differences. (You wouldn't argue that a car's odometer can measure micrometers just because it reads down to zero, would you?)
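A quick illustration of that rounding-to-zero effect (Python/NumPy sketch; it assumes a uniform mid-tread quantizer with full scale at ±1.0, which real converters only approximate):

```python
import numpy as np

def quantize(x, bits):
    """Uniform mid-tread quantizer: round to the nearest of 2**bits levels."""
    step = 2.0 / (2 ** bits)             # step size across a +/-1.0 full scale
    return np.round(x / step) * step

t = np.linspace(0, 1, 1000, endpoint=False)
quiet = 0.2 * np.sin(2 * np.pi * 5 * t)  # a tone about 14 dB below full scale

print(np.ptp(quantize(quiet, 16)))       # 16-bit: the tone survives (~0.4 p-p)
print(np.ptp(quantize(quiet, 2)))        # 2-bit: 0.0 - every sample rounds to zero
```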

The same applies to all the intermediate steps. Since we're dealing with a summation of sine waves - and FYI, all your square, sawtooth, etc. waves really are an infinite series of sine waves - small changes happen not only around the 0 V level but throughout the entire range. When you can't represent the finer gradations of, say, a mf flute playing alongside a fff trombone because the quantization steps are too far apart, you've lost that low-level detail and thus the dynamic range of the material. The difference between the analog input and the quantized output is the quantization error - an unwanted, effectively random noise signal - and it's what masks low-level details (and when it's large enough to mask audible details, it creates audible artifacts).
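You can also measure that error directly and compare it with the textbook rule of thumb, SNR ≈ 6.02·N + 1.76 dB for N bits (same assumed uniform quantizer as above; the rule treats the error as uncorrelated noise, which holds less well at very low bit depths):

```python
import numpy as np

def quantize(x, bits):
    step = 2.0 / (2 ** bits)
    return np.round(x / step) * step

t = np.arange(48_000) / 48_000
x = 0.999 * np.sin(2 * np.pi * 997 * t)  # just-below-full-scale test tone

for bits in (2, 8, 16):
    err = quantize(x, bits) - x          # the quantization error signal
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits:2d} bits: measured ~{snr:5.1f} dB vs rule {6.02 * bits + 1.76:5.1f} dB")
```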

To quote Ethan Winer:

However, distortion is related to bit depth. At really low bit depths the distortion is higher for the reason you state. This is why an 8-bit file doesn't just have more hiss than a 16- or 24-bit file, but also sounds more gritty and grainy. But a 16 bit file has much lower distortion than any loudspeaker you'll listen through.

The point is that the dynamic range is expanded downward, in terms of the minimum representable voltage change; the maximum is fixed at line level. This is unlike normal real-world quantities - sound pressure in real life, say - which (hypothetically/theoretically, as far as you can keep increasing pressure) extend without limit both in magnitude and in fineness of gradation. Note that analog audio is limited in fineness just as digital is, by the ability to accurately record/store/reproduce a given signal difference.
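For reference, the arithmetic behind the figures being thrown around in this thread is just 20·log10 of the ratio between full scale and one quantization step:

```python
import math

for bits in (2, 16, 24):
    print(f"{bits:2d} bits -> {20 * math.log10(2 ** bits):6.2f} dB")
# 2 bits -> ~12 dB (the figure from earlier), 16 -> ~96 dB, 24 -> ~144 dB
```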

So at a low bit depth you've massively reduced the difference between the largest and the smallest meaningful signals you can measure/store/reproduce: in other words, the dynamic range has been reduced. Thankfully, 16-bit audio is so good that we never run into its limits in the real world (except in mastering and mixing, where it definitely makes sense to use higher bit depths).

Keep thinking your bullshit 16/44.1 is enough. Stay off my earth with your keeping technology stifled.

The thing is, it is enough. Show me ANY large-scale, peer-reviewed (or at the very least, peer-reviewable and reproducible with fully explained methodology) blind test conducted at normal listening levels that shows otherwise. So far, there are several large-scale blind tests showing the differences are inaudible, and none showing they are audible at normal listening levels.

Here's one of them: over 500 trials showing the inaudibility of a 16/44.1 ADC/DAC loop inserted into a high-res digital audio system: http://drewdaniels.com/audible.pdf

It doesn't matter if higher bit depths and sample rates are superior mathematically and measurably (they are, undoubtedly) if we can't hear the difference. End of story. There's no point in wasting resources on them, although testing to ensure that the differences really are inaudible is certainly worth it. (You could conduct your own large-scale blind test if you'd like - but it must be reviewable and repeatable.) In the future, when we've moved on to gigapixel displays and the increased size of high-res audio files is meaningless, sure, it can't hurt to have them. But for now, there are a whole lot of drawbacks to dealing with high-res music files and absolutely NO proven intrinsic value to the format other than reduced noise at extremely loud playback levels.

Even if there are audible differences - and at extremely loud playback we do know there sometimes are, and further exhaustive testing may reveal other very small differences - they're insignificant compared to the real limits in reproduction of music: loudspeakers, listening rooms, and recording techniques. THOSE are what need to be improved first, not our "primitive" audio recording format. They indisputably make HUGE impacts on the quality of the sound, entirely unlike high-res audio.

Look, I'm not trying to promote "no improvement". In fact, I'm trying to promote improvement where it matters, rather than wasting time improving things that have extremely little or no benefit at all - like high-resolution audio. That's what I'm about: efficient allocation of resources. To impede this because you can't get past your biases and look at the math and science - including scientifically conducted listening tests - behind these conclusions is a disservice to yourself, to others, and to the hobby as a whole.
 
So if you have 2-bit audio, you couldn't play a tone loud enough to shatter your eardrum and also have complete silence from that format? Because then you would only have a 12 dB dynamic range? This is what bugs me about everyone's dynamic range propaganda. It's complete BS.

Actually, you can get full dynamic range out of a 1-bit DAC, and that has been used in commercial products (I have an old DAT recorder with a 1-bit DAC fed by digital resampling).

There is an analog circuit that converts the amplitude of a signal into a square wave of varying pulse density; the DAC itself is clocked at several megahertz. This stream can then be converted into a more standard format (multiple bits at a lower sampling rate).
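For the curious, here's a toy first-order delta-sigma modulator in that spirit (Python/NumPy sketch; real 1-bit converters use higher-order noise shaping and proper decimation filters, so treat this as an illustration of the principle, not a design):

```python
import numpy as np

def delta_sigma_1bit(x):
    """Encode x (values in [-1, 1]) as a +/-1 bitstream, first-order loop."""
    acc = 0.0
    bits = np.empty(len(x))
    for i, s in enumerate(x):
        acc += s - (bits[i - 1] if i else 0.0)   # accumulate the error
        bits[i] = 1.0 if acc >= 0 else -1.0      # the 1-bit quantizer
    return bits

osr = 64                                 # oversampling ratio
fs = 44_100 * osr                        # ~2.8 MHz bit clock
t = np.arange(fs // 100) / fs            # 10 ms of signal
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)  # 1 kHz tone at half scale

stream = delta_sigma_1bit(x)             # the 1-bit stream
# Crude decimation: moving average over one oversampling window, then
# take every 64th sample to get back to a multi-bit 44.1 kHz signal.
recovered = np.convolve(stream, np.ones(osr) / osr, mode="same")[::osr]
print(f"recovered range: {recovered.min():.2f} .. {recovered.max():.2f}")
```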

I'm not talking about extrapolating a nice wave, I'm talking about dynamic range. It's about resolution, not so much the difference between silence and deafeningly loud. Until these mathematicians step back and take a common-sense look at things, I have no time of day for them. A 2-bit format could give absolute silence, or absolute loud, but nothing in between (assuming you choose a tone whose frequency lines up with the sample rate). All this Nyquist BS in audio assumes that all sound signals are shaped in a sinusoidal manner.

Uh, actually, they ARE shaped in a sinusoidal manner. Every complex waveform can be broken down into a combination of sine waves. That's how frequency domain analysis works - you take a complex signal and break it down into a bunch of signals (of varying amplitudes) at different frequencies. The spectrum view you get in foobar? Each little bar is just the amplitude of the corresponding sinusoidal signal.

All complex waveforms are built from sinusoids. That's why clippers introduce distortion - when they're active, they push the audio toward a square wave, which is just a sine wave with its odd harmonics added in (those harmonics are the source of the high-frequency noise).
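That decomposition takes only a few lines to verify (Python/NumPy; the 5 Hz fundamental and 50 harmonics are arbitrary): summing a sine and its odd harmonics at 1/k amplitude converges to a square wave, Gibbs ripple at the edges aside:

```python
import numpy as np

t = np.linspace(0, 1, 10_000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * 5 * t))   # ideal 5 Hz square wave

approx = np.zeros_like(t)
for k in range(1, 100, 2):                    # odd harmonics: 1, 3, 5, ...
    approx += (4 / np.pi) * np.sin(2 * np.pi * 5 * k * t) / k

print(f"mean |difference|: {np.mean(np.abs(square - approx)):.3f}")
# Add more terms and the difference keeps shrinking (except right at the jumps).
```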
 
Exactly.

To look at this from another angle, how can an LP contain complex waveforms? The stylus can only be in one position at any given moment, yet it can reproduce, e.g., a bass & a trumpet playing different notes at the same time. (The same "problem" exists with a speaker's drivers.) It's all about multiple sources combining to produce a single - constantly varying - result.
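In code, the superposition is two lines (Python/NumPy; the frequencies are arbitrary stand-ins for a bass note and a trumpet note):

```python
import numpy as np

t = np.linspace(0, 0.05, 2205, endpoint=False)   # 50 ms at 44.1 kHz
bass = 0.8 * np.sin(2 * np.pi * 80 * t)          # 80 Hz
trumpet = 0.3 * np.sin(2 * np.pi * 1_175 * t)    # ~D6
groove = bass + trumpet      # one single-valued signal: one position per instant
print(groove.shape)          # still just one number per moment in time
```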
 