Sample rate is not a major variable in determining the sound quality of a recording. However, it is an important part of the recording process, and it can have an effect on the final product.
To be fair, if your sample rate isn’t high enough (at least 40 kHz), it can actually have a significant impact on the quality of your recording. But any rate above that threshold is unlikely to change your sound quality in any significant way.
The Key Points Where Recording Quality Is Affected
I consider sample rate to be a mostly inconsequential piece in determining recording quality. So I want to quickly walk through the items that I consider to be most important.
Usually the most important parts of the process are the points where there is some change to the sound/signal that’s making its way to your recording software.
1. Sound to electrical signal conversion
The first time there’s a significant change to your audio is when the acoustic signal (singing voice or instrument playing) is converted from a pressure wave in the air to an electrical signal.
This conversion usually happens by way of a microphone.
The standard path (in a dynamic microphone) is for the sound pressure waves to enter the microphone and push on a thin diaphragm that’s attached to a coil of wire.
The coil sits in a magnetic field, and when it moves (from the pressure waves crashing against the diaphragm) it induces an electrical signal in the wiring.
That’s the conversion to an electrical signal. And that transition from pressure wave to electrical signal has the opportunity to lose or distort the raw sound you’re recording.
The quality of your microphone can have a major impact on how many details of your raw audio are represented in that electrical signal.
Without a great initial signal, you have no hope of having a great recording, regardless of how great the rest of your equipment is.
2. Electrical signal boosting
That initial electrical signal is typically weak. Most audio interfaces these days have preamps built into their inputs.
The preamp’s job is to make that electrical signal stronger without changing or distorting it.
Sometimes a preamp is intended to “color” the source signal, but in general this should not be the case for an audio interface. You may want a preamp to add some character to your guitar/vocals when performing live, but in general that’s not desired for home recording.
So the quality of the preamp in your audio interface can also play a major role in your final recording quality.
If the preamp adds static or unwanted distortion to your signal, you’re again out of luck. You won’t be able to correct a muddy signal after the fact.
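In digital terms, an ideal preamp is nothing more than a clean multiplication. Here’s a minimal sketch of that idea (the gain value and sample values are made up for illustration; a real preamp works on analog voltage, not digital samples):

```python
def apply_gain(samples, gain_db):
    """Boost a signal by a gain expressed in decibels, without altering its shape."""
    factor = 10 ** (gain_db / 20)  # +40 dB works out to a factor of 100
    return [s * factor for s in samples]

weak_signal = [0.001, -0.002, 0.0015]  # hypothetical weak mic-level samples
boosted = apply_gain(weak_signal, 40)  # every sample scaled by the same factor
```

The point is that the output is just a scaled copy of the input, with nothing added. A preamp that distorts or adds noise breaks that property, and no later processing can undo it.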
3. Digital conversion process (or the accuracy of sampling)
The last major conversion happens when your audio interface changes the boosted electrical signal into 1’s and 0’s for the computer to utilize.
There is a hardware component and a software component to this process and both are areas where unwanted noise can creep into your recording.
If the instrument that’s measuring the electrical signal isn’t calibrated correctly or sensitive enough, then the final software conversion will be using bad electricity readings to create its computer file.
And if the instrument is perfect, but the software’s conversion algorithm is subject to errors and inaccuracies then you wind up with an imperfect file as well.
Once your computer has a saved audio file, the rest is mostly up to you.
4. Mixing and mastering
The mixing process obviously has a big impact on your final product. But unlike the previous three steps, you can always correct mistakes in mixing and mastering after the fact.
No amount of editing can turn a bad recording file into a good one, but a good recording can always be coaxed back into working order after a bad mixing artist has touched it.
In general, if you can get steps 1, 2 and 3 done well, then you can always correct mistakes that happen afterwards.
Understanding Sampling Rate
In the 3rd item above, I mentioned a process called digital conversion. This is the process of converting an electrical signal into digital data.
In order to get to the role sampling rate plays in this process I want to start with an analogy.
Comparing video to audio
Most computer-literate people understand the concept of pixels in creating images on a computer screen. Computers show images by displaying a huge grid of tiny pixels with different colors.
The pixels are far too small for the human eye to discern between them and the result is that we interpret the dots as a picture.
In order to display the pixel grid, the computer needs information about the number of pixels, their location and their color.
In video, the computer displays a series of snapshots (each one just a photo’s pixel grid) on the screen in quick succession. Again, our eyes can’t tell that there are distinct photos on the screen, and we interpret the high-speed slide show as a video.
So when a video camera creates a video file, it’s more or less creating a series of image snapshots to be played back later on the computer screen.
Similarly, when an audio interface is recording, it is taking snapshots of the (boosted) electric signal.
Sample rate is NOT a description of how precise or accurate those snapshots are, even though that precision is what has a major impact on the quality of your recording.
Sample rate is how often a snapshot is taken.
Sampling rate is measured in Hertz (Hz), which here means “samples per second.”
A common sampling rate is 44,100 Hertz (or 44.1 kHz), which means that 44,100 snapshots of the electrical signal are taken each and every second.
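That “44,100 snapshots per second” idea is easy to sketch in code. Here a hypothetical 440 Hz sine wave stands in for the electrical signal (the frequency and duration are just illustrative):

```python
import numpy as np

SAMPLE_RATE = 44_100  # snapshots per second (44.1 kHz)
DURATION = 2.0        # seconds of audio

# The moments in time at which a snapshot of the signal is taken.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# A 440 Hz sine wave (the note A4) standing in for the electrical signal.
signal = np.sin(2 * np.pi * 440 * t)

print(len(signal))  # 88,200 snapshots for 2 seconds of audio
```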
Taking 44,100 snapshots of your audio with a bad digital conversion process is like taking 44,100 blurry pictures.
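To make the “blurry picture” idea a bit more concrete, here’s a hypothetical sketch of what low-precision snapshots look like: each sample gets rounded to one of a limited number of levels. Real interfaces use 16- or 24-bit precision; the 3-bit case below is exaggerated for illustration.

```python
def quantize(sample, bits):
    """Round a sample in the range [-1.0, 1.0] to the nearest of 2**bits levels."""
    step = 2.0 / (2 ** bits - 1)
    return round(sample / step) * step

original = 0.3333
print(quantize(original, 16))  # barely distinguishable from the original
print(quantize(original, 3))   # a "blurry", 8-level snapshot
```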
Sampling Rate and Sound Quality
OK now we’re finally ready to talk about how sampling rate and sound quality are connected. I’ll make a statement and then defend it.
Any sampling rate above 40,000 Hz (40 kHz) will have no noticeable effect on the quality of your recording.
This is a theoretically true statement.
The range of frequencies that are audible to humans is 20 Hz to 20,000 Hz. And according to the Nyquist theorem, the number of samples per second required to accurately capture a sound, without losing any information, is double its highest frequency component.
If your sampling rate were, say, only 10,000 Hz, then you’d only be able to effectively record sounds up to 5,000 Hz, which could have a major impact on the quality of your recording.
But a sample rate of 40,000 Hz is enough to accurately record any sound of 20,000 Hz or less, which covers the entire human range of hearing.
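You can check the Nyquist claim numerically. In this sketch, a 30 kHz tone (above the 20 kHz limit for a 40 kHz sample rate) produces exactly the same snapshots as a 10 kHz tone: the too-high frequency “aliases” down into the audible range. The specific frequencies are just illustrative:

```python
import numpy as np

FS = 40_000             # sample rate in Hz
t = np.arange(FS) / FS  # one second's worth of snapshot times

too_high = np.cos(2 * np.pi * 30_000 * t)  # 30 kHz: above the 20 kHz Nyquist limit
alias    = np.cos(2 * np.pi * 10_000 * t)  # 10 kHz: its alias (40,000 - 30,000)

# Sampled at 40 kHz, the two tones produce identical snapshots.
print(np.allclose(too_high, alias))  # True
```

This is why frequencies above half the sample rate can’t just be recorded “less well”: they get misread as entirely different, lower frequencies.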
Now, there are sound engineers and home recording artists who give reasons for using sampling rates at double that 40,000 Hz threshold or more.
Situations where sample rates > 40 kHz might matter
I’ll be honest, I’ve not personally experimented much with sampling rates. I’ve pretty much always just stuck to the standard 44.1 kHz rate.
There are a lot of engineers out there that say higher sampling rates can make a difference when your recording signal is being manipulated with various plugins.
If any of the plugins you use process audio at sample rates above 44.1 kHz, then you may very well notice improved results by using a higher sample rate in your recordings.
But here’s why I pretty much stick to the standard:
- Higher sample rates use more computer resources and increase the chance of my computer getting overloaded or freezing.
- Higher sample rate recordings require more storage space on your computer.
- I mostly use plugins on MIDI recordings, which don’t care about sample rate.
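On the storage point, the arithmetic is straightforward: for uncompressed audio, file size scales linearly with sample rate. The bit depth and channel count below are typical values, and the calculation ignores any file-format overhead:

```python
def bytes_per_minute(sample_rate, bit_depth=24, channels=2):
    """Raw PCM storage per minute of audio, ignoring container overhead."""
    return sample_rate * (bit_depth // 8) * channels * 60

for rate in (44_100, 96_000, 192_000):
    mb = bytes_per_minute(rate) / 1_000_000
    print(f"{rate / 1000:g} kHz -> {mb:.1f} MB per minute")
```

That works out to roughly 16 MB per minute at 44.1 kHz versus about 35 MB per minute at 96 kHz, so moving to a higher rate roughly doubles your disk usage for every track.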
So I’ll leave it to you to decide where to draw the line when choosing a sample rate for your recordings.
But remember this.
The most important parts of creating a great recording are all the spots where the signal changes in some way. The transition of raw sound to electrical signal, the boosting of that signal, and the conversion of that signal into digital data are all FAR more important than sample rate in determining recording quality.