This is actually one of the topics I'm about to write up in a series of
technical articles, starting mid-January on adadepot.com .
Sorry for the lengthy post, but it's difficult to explain in a short post.
Sampling rate:
Many audio interfaces simply aren't built to function equally well at
both low and high sampling rates, and thus may work better at the higher
sample rates.
For this reason, some/many say that sampling at higher rates produces
better audio quality with their particular interface.
My comment regarding reality is: if the chosen gear does sound better at
higher sample rates, by all means of course use it that way ;)
From a theory point of view, sampling at higher rates makes absolutely no
difference.
The Nyquist-Shannon sampling theorem states in clear math that a signal
containing no frequencies above half the sampling frequency can be
accurately reproduced - provided infinitely steep anti-aliasing filters are
employed.
See http://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theorem
Older designs using analog filters often had issues with such filters being unable to completely cut off aliasing products.
For more than a decade, modern AD/DA converters have had built-in digital anti-aliasing filters which, together with oversampling, result in very steep slopes, making it fairly easy to reproduce 20 kHz from 44.1 kHz sampling.
It is interesting to note that (64 to 256 times) oversampling converters actually up- and downsample to the wanted sample rate.
In other words, although we may choose a certain sampling rate with our interface, the converter internally samples at a much higher rate, and then converts this to our desired rate.
Sampling word width:
We get approx 6.02 dB per bit, or in practical terms 6 dB/bit.
So 16 bits give us a dynamic range of 96 dB (which can be improved using aggressive noise shaping).
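The 6 dB/bit rule falls straight out of the math - each extra bit doubles the number of quantization levels, adding 20*log10(2) ≈ 6.02 dB. A quick sketch in Python:

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Each bit doubles the number of quantization levels,
    # adding 20*log10(2) ~ 6.02 dB of dynamic range.
    return 20 * math.log10(2 ** bits)

# dynamic_range_db(16) -> ~96.3 dB, dynamic_range_db(24) -> ~144.5 dB
```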
Most practical music gear, unless being quite pro, will not have this dynamic
range, and much less a signal/noise ratio even approaching 96 dB.
My RME quadmic mic preamp has 120dB range, but the rest of my gear
doesn't, and building a project studio with a noise floor at even -80dB is a
dire and expensive task.
So 16 bit would suffice just fine for the raw recordings.
However, it's a different issue once we start to work on those samples.
Every time we add two equally sized full-scale samples, the result will be twice as large; an increase of 6 dB. Expressed in bits:
Adding two full 24 bit signals will result in a 25 bit signal (1 bit = 6 dB).
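The headroom arithmetic can be shown with plain integers (a minimal sketch; the constant names are my own):

```python
# Signed 24-bit audio holds values from -2**23 to 2**23 - 1.
FULL_SCALE_24 = 2**23 - 1

# Summing two full-scale samples doubles the value: an increase of 6 dB.
s = FULL_SCALE_24 + FULL_SCALE_24

# The result no longer fits in signed 24-bit; it needs one extra bit.
assert s > 2**23 - 1
assert s.bit_length() == 24   # 24 magnitude bits + sign bit = 25-bit signed
```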
Using EQ and myriads of effects will all make the resulting signal grow.
Even if we keep the signal normalized on the DAW, it still grows - it
just grows downwards, that is, it'll have more and more lower bits.
We cannot simply truncate or ignore those lower bits, as they all contain
information about the signal.
When we at some point convert to the final 44.1/16 format, all bits in this
large-bit signal are used as part of the conversion and dithering process.
This is a main reason why the DAW world has been moving from 24-bit to 32-bit, and recently to 64-bit processing: to avoid losing any 'bit' of information during the process, and only at the final stage convert to 44.1/16.
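That final conversion step is where dithering comes in. As a rough sketch (this is my own illustration, not a production-quality dither stage), TPDF dither adds triangularly-distributed noise of about one LSB before rounding, so the low bits are randomized rather than silently truncated:

```python
import random

def to_16bit_tpdf(sample: float) -> int:
    """Requantize a float sample in -1.0..1.0 to signed 16-bit,
    applying TPDF dither before rounding."""
    scaled = sample * 32767.0
    # Sum of two uniform variables gives a triangular pdf, about +/-1 LSB.
    dither = random.random() - random.random()
    q = round(scaled + dither)
    return max(-32768, min(32767, q))   # clamp to the signed 16-bit range
```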
I said above that 16 bits would work just fine for recording.
While this is true, the recording engineer would have to place the audio
fairly close to full-scale, and overshoots could easily occur (like the sax
blowing extra hard, or someone spitting into the mic).
Using 20 or 24 bits makes the recording process somewhat easier ;)
Succeeding processing will work just fine on those 16/20/24 bit samples in the 32 or 64 bit formats used internally in the DAW.
Reyn Ouwehand wrote:If your material has to end on a CD, record on 88.2kHz. If it's for film
record on 96kHz. Due to the math..
I assume you mean that an 88.2 kHz sample only needs to be downscaled by a
factor of two to arrive at 44.1 kHz..
Which isn't so. The 88.2 signal will need to be calculated and dithered
exactly the same way as with any other sample rate.
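One way to see why the factor of two isn't "free": simply dropping every other sample still requires a low-pass filter first, or anything above the new Nyquist frequency folds back as an alias. A tiny pure-Python illustration (the frequencies are my own example values):

```python
import math

RATE = 88200

def tone(freq: float, rate: int, n: int) -> list:
    # n samples of a sine wave at the given frequency and sample rate
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

high = tone(30000, RATE, 64)        # 30 kHz tone, fine at 88.2 kHz
decimated = high[::2]               # naive drop-every-other to 44.1 kHz
alias = tone(14100, RATE // 2, 32)  # 44100 - 30000 = 14100 Hz

# Without filtering, the decimated 30 kHz tone is sample-for-sample
# identical to a phase-inverted 14.1 kHz tone at 44.1 kHz.
assert all(abs(d + a) < 1e-9 for d, a in zip(decimated, alias))
```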
Rick Walker wrote:On 7/22/64 11:59 AM, David Gans wrote:
I record all my gigs at 96-24. Better to archive, and better to produce, in high-res and downsample at the last stage.
What I've always wondered:
Does a recording at 96-24 downsampled to 44-16
sound better than a recording sampled at 44-16 initially?
And if so, what's the logic?
Daniel Thomas once explained the answer to me a long time ago, but I have completely forgotten what
he told me. Please forgive the brain fart.
Rick Walker
--
rgds,
van Sinn
--