Whether you’re a mixing engineer or an independent music producer trying to get artists to buy your beats, it’s absolutely fundamental that your final product is dynamically sound, and that’s where a solid understanding of audio compression comes in.
Compression is an often misunderstood tool in digital music production, and yet it is one of the two most essential processes you can apply to give your tracks that professional “full” sound. The other tool, of course, is EQ, knowledge of which is vital to effectively mastering the art of compression.
Now, let’s lift the lid on this mysterious “compression” concept and get your tracks sounding loud and full.
I want this to be as easy as possible, so let’s call this guide Audio Compression for Dummies. (I’m not calling you a dummy, it’s just a name, relax.)
Grasping the concept
I have seen many questions pop up in production forums time and time again: what is the difference between EQ and compression?
Here’s the simple answer:
EQ deals with frequency, compression handles volume…
The confusion tends to stem from the fact that playing with frequencies can impact volume, which is why it’s important to have a solid grasp of EQ before trying to get your head around compression.
Audio is represented by waves, and the height of a wave (its amplitude) corresponds to its loudness. The red lines (the “lid” we mentioned) represent the maximum amplitude the wave can reach before it starts to suffer. Go beyond the red lines and the sound will start to lose definition and produce ugly clipping distortion. Pushing levels into the red like this is sometimes called “redlining” in the industry and is a VERY BAD THING.
So, to get the optimum level of loudness we want to get our waveform’s height up so that it’s as close as possible to the lid without redlining.
Now, consider this waveform:
Here, the highest parts of the waveform (the ones touching the lid) are very narrow compared to the bulk of the waveform, meaning that the loudest parts are very short sections of sound. These “peaks” are often the result of particularly sharp individual hits such as a snare, vocal plosive (the popping noise you get when you record a “p” type sound into a microphone), or similar.
On a complete track, they could be the result of EQ anomalies in specific places. Whatever the reason, the end result is that the average volume of the waveform is pushed down (away from the lid) by those narrow peaks, as demonstrated by the yellow overlay here:
With the average volume sitting so far below the lid, the waveform will sound quite quiet overall to the human ear. To make it sound nicely loud, we need the rest of the track to be sitting closer to the lid where the peaks currently are. In other words, the average volume needs to be louder.
Now, we could just raise the volume of the whole waveform, but that would immediately push the peaks outside of the lid and create some nasty distortion. So the only option we have is to reduce the volume of those peaks so that they are more in line with the average volume, and then raise the volume of the entire waveform. This, in a nutshell, is what compression does. Here is a simplified example to illustrate:
With that explanation, you should hopefully see why it’s called compression: on the most basic level it’s about squashing (compressing) the loud peaks on a waveform to make the whole thing more even. It’s the same as a human being moving a volume fader down during the volume peaks and back up again during the quieter parts, but done automatically and much faster than a human hand can manage. An automatic volume control: that’s basically what compression is.
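The fader analogy can be sketched in a few lines of code. This is a toy version that applies the squash statically, sample by sample (a real compressor smooths its gain changes over time, which is what the attack and release controls below are for), and all the numbers are made up purely for illustration:

```python
# Toy illustration: squash any sample above a threshold, then raise
# the whole signal. Values are linear amplitude in the range 0.0-1.0.
THRESHOLD = 0.5   # level where compression kicks in (hypothetical)
RATIO = 4.0       # 4:1 -> the excess over the threshold is divided by 4

def compress_sample(x):
    """Reduce only the part of the sample that exceeds the threshold."""
    sign = 1.0 if x >= 0 else -1.0
    level = abs(x)
    if level > THRESHOLD:
        level = THRESHOLD + (level - THRESHOLD) / RATIO
    return sign * level

samples = [0.1, 0.3, 0.9, 0.2, -1.0, 0.4]
squashed = [compress_sample(s) for s in samples]

# The loudest peak (1.0) is now 0.5 + 0.5/4 = 0.625, so we can raise
# the entire waveform by 1.0 / 0.625 = 1.6x without redlining.
makeup = 1.0 / max(abs(s) for s in squashed)
louder = [s * makeup for s in squashed]
```

Notice that the quiet samples end up 1.6 times louder than they started, while the peaks stay inside the lid: exactly the “squash then raise” trick described above.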
How To Use Compression Wisely
Now before you go rushing off to squash the peaks on your quiet-sounding tracks, sit down for a second and read on a bit further. With great power comes great responsibility, and applying compression willy-nilly will actually do way more harm than good. Knowing when to use compression is just as important as knowing how to use it.
At this point, we need to talk about dynamics, as a compressor is, after all, a dynamics processor.
But what does that mean?
Dynamics in music quite simply refers to the relative loudness of a note in a musical performance/track.
Please note the word relative: dynamics are not referring to absolute values, and this is important to understand when making your compression decisions.
It’s easier to explain by referring to its original context of classical music annotation. When composers write musical scores, they often want particular notes to be louder or softer than the rest of the piece, to emphasize a dramatic point in the music for example.
Since humans cannot accurately control the real volume of the notes they are playing (try telling a violinist to knock out a concerto at exactly 70dB), the composer can only direct the musician to play a note louder or more softly than the note they are currently playing. For this, composers annotate their compositions with terms such as “forte” (strong), “piano” (softly), and various levels between and around such as “fortissimo” (very strong) and “mezzo-piano” (semi softly). Of course, it is up to the musician to interpret these instructions and play the note accordingly.
So in broader terms, dynamics refers to the natural fluctuations in volume between individual notes in a piece of music.
Why is this important? Let’s take the example of a piano recording, such as Rachmaninoff/Mussorgsky’s “Hopak“, which is a solo piano piece. In waveform it looks something like this:
Lots of spikes and dynamic fluctuations – so, should it be compressed? For the sake of illustrating this point, no it shouldn’t (in reality you would compress it, but subtly). If you record a MIDI piano performance into a sequencer and then adjust all the individual notes so that they are exactly the same velocity and volume, the result will sound very flat, unnatural and lifeless. If you try to flatten out the above waveform too much, the loss of dynamic range will make it sound – you guessed it – flat, unnatural and lifeless. The track is supposed to be quiet in some places and loud in others.
Listen to it here…
There’s a reason why dynamics are important to us as human beings, and it comes down to the volumes produced by everyday sounds (normal human conversation sits at around 60dB, for reference) and the range of those sounds the human ear can actually hear.
In reality, the human ear can hear sounds from just above 0dB all the way up to around 140dB, though anything over 120–130dB causes physical pain, and sustained exposure at those levels causes permanent hearing damage (loud rock concerts can push well past 110dB, but I digress). The point is this: with such a huge range of volumes audible to us (in technical terms, the dynamic range of the human ear), and hearing being like any other human sense in that it likes to be stimulated, a tune with hardly any dynamic range will sound incredibly dull, a bit like eating a dry piece of toast with no butter or anything on it.
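All of these loudness figures are in decibels, which are logarithmic rather than linear: every 20dB corresponds to a tenfold increase in amplitude, and roughly every 6dB to a doubling. A quick sketch of the standard conversion:

```python
import math

def db_to_ratio(db):
    """Convert a dB change to a linear amplitude ratio."""
    return 10 ** (db / 20)

def ratio_to_db(ratio):
    """Convert a linear amplitude ratio to a dB change."""
    return 20 * math.log10(ratio)

# +20dB is 10x the amplitude; +6dB is roughly 2x.
print(db_to_ratio(20))  # 10.0
print(round(db_to_ratio(6), 2))  # approximately 2
```

This is why a 140dB dynamic range is so enormous: it spans a factor of ten million in amplitude.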
Sometimes there are better alternatives to compression to solve your volume issues, from simple volume envelopes to EQ adjustments. A drum loop in which the snare has too much high end could cause unwanted peaks and drive the overall volume of the drums down, but in this case a simple EQ tweak to take out the offending high-end frequencies would drag the peaks down without adversely affecting the dynamics. Conversely, using heavy compression on a drum loop instead of EQ can easily ruin the overall sound, as drums are short samples where a narrow range of frequencies defines the overall sound; compression could ruin the “snap” on a snare, for instance, making it sound woolly or – worse – creating an unpleasant “sucking” sound. Try it and you’ll see what I mean.
Finally, applying compression to a finished track should be approached with caution, as at this stage we are dealing with a much more complex range of dynamics than individual parts, so the margin for error is greatly reduced. You should be thinking about compression on each individual part (or instrument) in your track, and a final stage compression should only be used on a very subtle setting to smooth out the last remaining bumps, or you’ll risk messing up the dynamics too much.
There are no hard and fast rules for what should be compressed and what shouldn’t. It really just depends on what sounds good. I could say, for example, that you should add compression to all your vocals and basslines, but only because vocals and basslines tend to sound better when compressed. In some cases, they may sound better uncompressed though, so it’s important to remember to use your ear to make decisions, and always consider the dynamics of a particular recording in relation to the rest of the track.
Introducing The Compressor
So you’ve decided that you need to apply some compression to your recording. For beginners, the first question is probably “which compressor should I use?”. Unfortunately, that’s a huge discussion all of its own, as each compressor has a different character and many have unique settings that allow you to do advanced funky things with your tracks – since compression can fundamentally alter the sound of a recording, it can also be used creatively to deliberately alter dynamics, for example. But we’re not going to get into that; for the purposes of this tutorial, we’re going to be discussing the seven most common settings found on compressors, and we will be using the FabFilter Pro-C to illustrate because it’s one of the best out there in terms of both quality and ease of use.
So, without further ado, here are the main settings you need to know about:
Moving from left to right in the screenshot this is what each dial does:
- Input: the volume of the audio going into the compressor. If left at 0dB, it will use the original volume of your audio recording; tweaking the dial here will either boost or lower the volume of your original signal. Generally speaking, you won’t need to use this unless you have a particularly quiet or loud signal in the first place.
- Threshold: this is the volume, in dB, at which the compressor will start to kick in. So, if you set the threshold to -15dB, the compressor will do nothing until the volume of the audio going into it hits that level, at which point it will start automatically compressing. Think of it as a sensitivity setting if you like, in that the more you turn this dial the more compression is going to happen.
- Ratio: this is a mathematical representation of how much the signal above the threshold will be reduced by the time it reaches the output. A ratio of 4:1 means that for every 4dB the audio rises above the threshold, only 1dB comes out the other side: 4dB over becomes 1dB over, 8dB over becomes 2dB over, and so on. If the maths doesn’t stick, all you really need to know is that the higher the ratio, the more extreme the compression will be. It’s also worth noting that ratios above 10:1 or so are generally defined as limiters. That’s right, the only real difference between a compressor and a limiter is that a limiter has a very high ratio and a very short attack time. Whack the ratio on your regular compressor up to 10:1 or more and bingo, your compressor is now a limiter. Magic!
- Attack: much like the attack setting on a synth, this setting determines how quickly the compression happens. At the lowest attack setting, the sound will be compressed almost instantaneously, whereas a higher value will delay the compression. This is useful if you have a snare sample for example, which has a sharp sound at the beginning of the sample. A low attack setting would squash that snap and make the snare sound all squishy and rubbish, so delaying the compression slightly by increasing the attack will ensure that only the latter part of the snare sound is compressed.
- Release: if you understand the attack, you understand release, as basically it determines how long the compression will linger on your sample after the sound drops below the threshold level again. If you set the release very low, the compression will quit almost instantly, which can create a very unnatural “pumping” effect as sounds in real life don’t tend to change volume instantaneously. Conversely, leaving this setting too high can create a situation where the compressor is not given enough time to recover before the next sound above the threshold arrives at its input.
- Output (sometimes labeled “Makeup Gain”): once your audio has gone through all the compression parameters, the output is the end result and this dial, like the input dial, merely adjusts the gain of the signal at this point. Unlike the input dial, you will be using this on a regular basis to raise the overall volume of the audio after the compression has created extra headroom. The output dial is, essentially, how you achieve your extra loudness. Often this can be set to auto so that the new headroom is filled automatically.
- Knee: this has two settings – hard or soft – and relates to how gently you want the compression to happen around the threshold point. A soft knee will make the whole compression process happen more gently by starting to compress slightly before the signal reaches the threshold, but at a lower ratio value which gradually increases to the full amount as the signal rises above the threshold. A soft knee is useful for creating a more natural sound, such as in acoustic styles of music where naturalism is important. A hard knee, on the other hand, means that the full ratio amount you have set will be applied to a sound as soon as it reaches the threshold, which is often appropriate for things like drums in electronic music.
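To tie the dials together, here is a rough sketch of what happens inside a simple feed-forward compressor. The parameter names mirror the dials above, but the smoothing is a basic one-pole envelope follower and the whole thing is a teaching toy with made-up defaults, not FabFilter’s actual algorithm:

```python
import math

def compress(samples, sample_rate, threshold_db=-15.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0, makeup_db=6.0):
    """Simplified compressor: envelope follower + hard-knee gain computer."""
    # One-pole smoothing coefficients derived from the attack/release times.
    attack_coeff = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coeff = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    makeup = 10 ** (makeup_db / 20)   # Output/makeup gain, as a linear factor
    env_db = -120.0                   # Running estimate of the signal level, in dB
    out = []
    for x in samples:
        level_db = 20 * math.log10(max(abs(x), 1e-6))
        # Attack: chase rising levels quickly; release: fall back slowly.
        coeff = attack_coeff if level_db > env_db else release_coeff
        env_db = coeff * env_db + (1 - coeff) * level_db
        # Gain computer: reduce only the excess above the threshold.
        over = env_db - threshold_db
        gain_db = -over * (1 - 1 / ratio) if over > 0 else 0.0
        out.append(x * 10 ** (gain_db / 20) * makeup)
    return out
```

With the defaults above, a steady full-scale signal sits 15dB over the threshold, gets 11.25dB of gain reduction at 4:1, and then has 6dB of makeup gain added back; a signal below the threshold passes through with only the makeup gain applied.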
Most of you will be using compressors that use a line on a graph instead of the rack format above. In your case, it will look something like this:
This is basically a visual representation of what a rack compressor does: the thick black line is the audio, the thinner vertical line through the middle is the threshold, the point at which the audio “bends” is the knee (in this graph, a hard knee – a soft knee would look more like a smooth curve), and the slope of the line after the knee relates to the ratio. In the graph above, the dotted line would represent a ratio of 1:1. If the line after the knee ran horizontally, that would represent a ratio of infinity:1, so the closer the end of the audio line is to the top right-hand corner, the lower the ratio. Here’s another graph to illustrate this:
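That graph is just a plot of input level against output level, and we can write the curve down directly. The soft-knee formula below is one common quadratic formulation, used here purely for illustration rather than being the curve any particular plugin draws:

```python
def transfer_curve(in_db, threshold_db=-15.0, ratio=4.0, knee_db=0.0):
    """Static compressor curve: input level in dB -> output level in dB."""
    over = in_db - threshold_db
    if knee_db > 0 and abs(over) <= knee_db / 2:
        # Soft knee: blend gradually into full-ratio compression.
        return in_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)
    if over > 0:
        # Hard knee / above the knee region: the full ratio applies.
        return threshold_db + over / ratio
    return in_db  # Below the threshold: level passes through unchanged

# 4dB over the -15dB threshold comes out just 1dB over, as per the
# 4:1 ratio example earlier.
print(transfer_curve(-11.0))  # -14.0
```

Note that with a soft knee the curve is already bending slightly at the threshold itself, while the hard-knee version is a perfectly straight line right up to the bend.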
It all makes sense now, doesn’t it?
Now you know what all the various settings do, I’m sure you’re itching to go and try it out on your own tracks. By all means do so, as the best way to learn compression is to experiment and find out exactly how all these things sound. If you’re still lacking in confidence, then we have put together a video for you which summarizes the above lessons and shows you how to use a compressor to make a Reese bassline sound more even.
So you see, compression isn’t such a mystery after all. There are a few other things you may come across as you dive deeper into the world of compression, such as noise gates, RMS, sidechaining, and so on, but I’ll leave you to discover those for yourself once you’ve got to grips with the fundamentals. As with all things in music production, the key is practice, practice, and more practice.
Just have fun while you’re doing it.