The Complete Guide To EQ

I started making music in Reason more than ten years ago. I would happily sit there thinking of myself as the next Dr. Dre, nodding away, adding sound after sound and effect after effect, piling phasers on top of delays, filters on top of distortion and so on. The main problem was that my tracks sounded weak, a lot of people would say they sounded very unprofessional, and none of the elements really fit together well.

Put it this way: I was a long way off becoming the next Dre (and still am).

Slowly I came to realise the problem. I was not really using, or even being aware of, any EQ or compression. After all, they were both a bit scary, I didn’t really understand them, and besides – they were boring, compared to my funky range of drastic FX processing. Silly me. I should have realised that…

EQ and compression are simply the two most important tools in music production.

Think of it this way. You are building a house. Exciting things like flangers and filter-delays are like the designer purple wallpaper or expensive thick-pile carpet. They make your house look welcoming, or fashionable, or comfortable to live in. EQ and compression on the other hand are making sure the bricks are mortared together and the walls are strong enough to support a roof. And at the end of the day, sure, your designer wallpaper may be lovely, but if the kitchen has collapsed into rubble and the bedroom door is only three inches wide, your house won’t be much cop.

My mistake – and, I think, the mistake made by many learning producers – was to be tempted by the more exciting task of choosing the colour schemes and leather sofas, when my walls could be knocked down by a sneeze and my roof was made of paper. Get the fundamentals sorted first! Otherwise the frilly stuff actually just gets in your way and makes it harder to even work out what your problems are!

Hence this tutorial.

Now I could simply do a lightning-quick tutorial on compression – I could say, for example, “for basslines it’s best to compress at a ratio of 3:1, threshold -6dB” or whatever (that’s a totally fictional example by the way). But this is only so much use. Of course, I encountered a lot of advice and information about EQ and compression while I learned. But I know that I never began feeling truly confident in such engineering matters, never really felt I understood any of it, until I put all the pieces together, joined the dots, and worked out that these two subjects, and so much more, are all deeply inter-related. So, it is my ambitious aim to map out this whole territory. I present not a strictly practical tutorial, but rather a way of thinking, which I personally found led to a greater understanding, which in turn had many practical benefits.

My central concept is that producers in the digital domain are effectively working inside a box. In this tutorial, I will define the box, explore a few fundamental concepts, and highlight some of the limitations of digital audio.

The Box

Anyone remember Rimmer’s classic speech about life being like exams? I don’t remember it exactly, but the gist was this… “Some people are like an English exam, where they start at 0, and everything good they write gains them a mark. Others are like French exams, where they start at 100, and lose a mark for every mistake they make”. Well, in an extremely tenuous way, that’s like the difference between sound as it occurs in the real world (English exam), and digital audio (French exam).

Let me explain.

In the real world, what is silence?

Now, I won’t sweat the exact numbers, but bear with me here… Ordinary talking is a good deal louder than silence – around 60dB. Roadworks machinery is much louder than talking, maybe 80-90dB. A jumbo jet taking off, or an enormous sound system for a stadium rock concert, may top 120dB. The louder the sound, the bigger the number. Is there a maximum? Well, I dare say there is a maximum as governed by the rules of physics, I’m not really sure. But as far as you’re concerned – not really. Consider the audio as represented by waves (which is what it is, traveling through the air). The height of the wave is the amplitude (ie, loudness).

In the real world, if a jumbo jet is louder than a car, that’s because its wave is higher. Simple.

Digital audio is different. In this case, 0dB is not silence – it is instead our clear and unarguable maximum (strictly speaking, 0dBFS: decibels relative to full scale). No sound can be louder than 0dB. This, then, is the “lid” of our box. You cannot make a wave taller than the box allows. Say you have a “car” sound which is so “high” it is touching the lid of the box, and you want to add a “jet” which is louder – well, you cannot just make a wave which is taller. It simply will not fit in the box.
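
If you like seeing things in code, here is a minimal Python sketch of the lid in action, assuming floating-point samples where ±1.0 represents 0dBFS (the numpy arrays and the “car”/“jet” amplitudes are purely illustrative):

    import numpy as np

    sr = 44100                                # sample rate (Hz)
    t = np.arange(sr) / sr                    # one second of time values

    car = 0.9 * np.sin(2 * np.pi * 110 * t)   # a sound already close to the lid
    jet = 1.5 * np.sin(2 * np.pi * 80 * t)    # a "louder" sound that cannot exist

    # Anything taller than the box simply gets flattened against the lid:
    clipped = np.clip(jet, -1.0, 1.0)

    print(np.max(np.abs(car)))      # ~0.9 -> fits in the box
    print(np.max(np.abs(jet)))      # 1.5  -> taller than the box allows
    print(np.max(np.abs(clipped)))  # 1.0  -> squashed flat against 0dBFS

That flattening is clipping, and it sounds every bit as nasty as it looks.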

So we have our first dimension of the box, the amplitude (volume). This lid is an absolute limit we have to work within – we cannot store a wave any louder than 0dB. The width of the box is simple. Presuming that you aren’t producing in 5.1 surround, of course, the width is the stereo field – one side of the box being fully left, one side of the box being fully right. Again, an absolute limit – you can’t go more left than having no signal in the right whatsoever.

That leaves us with the third dimension of the box, which is frequency. Now if you really have no idea what frequency is, we could be in trouble – try googling; this article would turn into a novel if I set about explaining things like that. Once again, we are met with absolute limits. These are dictated by our digital audio settings: if we are sampling at 44.1kHz (CD quality), then by the Nyquist theorem we can store no frequency higher than 22.05kHz. In practice, with filtering, it works out at around 20kHz, which happens to be the approximate frequency at which human hearing runs out anyway. So we’ll take that as our upper limit.

The lower limit is, mathematically speaking, whatever tiny value just above 0Hz you can represent. Again, in practice, this is irrelevant, since below 10Hz we are seriously sub-audible. General human hearing has pretty much given up by 20Hz, so we’ll take that as our lower limit.
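
To put rough numbers on the box, here is a tiny Python back-of-the-envelope, assuming CD-quality settings (the “practical” figures are the approximations from the text, not exact constants):

    sample_rate = 44100              # CD quality, in Hz

    nyquist = sample_rate / 2        # theoretical ceiling: 22050 Hz
    practical_top = 20000            # ~20kHz once filtering and hearing are considered
    practical_bottom = 20            # ~20Hz, below which hearing has largely given up

    print(f"Theoretical ceiling: {nyquist} Hz")
    print(f"Usable box: {practical_bottom} Hz to {practical_top} Hz")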

As a producer, your track will have to fit entirely within this box. Thus, not only will each individual sound have to fit within the box, but all the sounds put together must fit within it too. This is important. In fact, this whole paragraph is important. Read it twice. You need to imagine this virtual box of digital audio as a physical box. Say, a shoebox. As you produce, imagine your sounds as objects being placed inside that box. There is only room for so many of them. What shape they each are will determine how nicely you can pack the box. Whether or not some of them are fragile and important may affect how you arrange the various objects together (placing a china vase on top of the t-shirt, not underneath the breeze block).

Imagine the act of finishing your track and playing it to others as gift-wrapping the box and handing it to the person as a present. If the box is nearly empty, they’re unlikely to be as impressed as if it is packed jam-full of goodies. And they’ll be very disappointed if they open it and discover the china vase has been smashed to pieces.

Now at this point, I wouldn’t blame you for seeing this crazy metaphor and thinking I have lost the plot altogether. However, bear with me, as things should start making a bit more sense as I take this metaphor and demonstrate it in more practical terms. First up, let me unpack the previous paragraph.

  • The ‘fullness’ of the box is equivalent to the overall volume of your track. And I do not merely mean volume in a simple objective way, but the overall feeling of “fatness”, “weight” and “punch”. Also, this is the “fullness” of the box in every dimension – it’s no good filling it to the top, but only in one corner, or filling it end to end, but only occupying half the depth. If you’re a new(ish) producer, you have probably noticed that when you play your own track followed directly by, say, Stakka and Skynet, yours is simply quieter, and certainly not as fat. That is because Stakka and Skynet (and their mastering engineers) have managed to cram the box fuller than you did.
  • The china vase relates to the next problem. Anyone, in crude terms, can jam a helluva lot of stuff into a box, but can they do it without breaking anything? Imagine the china vase as your central melody. This is crucial (it is the best and most expensive part of your “present” to your friend). Unfortunately, you put a breeze block on top of it (totally obscured it with a large, but not valuable or important, sound), and smashed it (your central melody is no longer properly audible). Just as your friend would be disappointed with a smashed gift, so they would be unimpressed by a track whose central melody was inaudible.

OK. Hopefully, we have a rough idea of the box we’re working within. There are three dimensions: stereo width, frequency, and dynamics. I’ll take a quick peek at stereo issues at the end, but first, let’s look at the practical matter of how EQ (and compression) can help us pack that box really nicely.

Think Visually

The simplest way of explaining frequency is that it is the technical term for pitch. The “A” above middle C, for example, is 440Hz. However, the first key point to establish is that sounds, in practical terms, do not have a single frequency; they span a whole range of frequencies. Any instrument will produce not just 440Hz but a wide range of frequencies at various different volumes, with our overall impression of its “sound” reflecting this complex output. How do we tell the difference between a violin playing an A and a trumpet playing an A? By their “timbre”, or the overall quality and properties of the sound. Mathematically, this equates to the overall “shape” of the frequencies produced.
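
To make “timbre equals the shape of the frequencies” concrete, here is a rough Python sketch. The harmonic amplitudes are invented purely for illustration – real instruments are far messier – but it shows how two sounds can share a 440Hz fundamental and still have completely different shapes:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    fundamental = 440.0                        # the "A" above middle C

    # Two made-up instruments playing the same note: same fundamental,
    # different relative strengths of the harmonics = different timbre.
    violin_ish  = [1.0, 0.7, 0.5, 0.4, 0.3]    # illustrative amplitudes only
    trumpet_ish = [1.0, 0.3, 0.8, 0.2, 0.6]

    def note(harmonic_amps):
        wave = np.zeros_like(t)
        for n, amp in enumerate(harmonic_amps, start=1):
            wave += amp * np.sin(2 * np.pi * fundamental * n * t)
        return wave / np.max(np.abs(wave))     # normalise so it fits the box

    a_violin = note(violin_ish)
    a_trumpet = note(trumpet_ish)              # same pitch, different "shape"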

An array of splodges

Time now to reveal a great secret, a killer tip which will revolutionize how you produce and make you successful overnight.

Dream on! There is no such thing. On the other hand, this idea works for me, and it may just be some help to you. Here goes: when producing, and especially when mixing, constantly visualize the elements of your track as various different colored splodges on the same graph. Remember – all your sounds are fitting in the same box. You only have one frequency spectrum to fill (or otherwise) with noises.

Bear in mind that this is greatly simplified, but hopefully, you get the idea. Now, for the first important point based on this. The powerful, punchy, LOUD professional-sounding dance track fills the box in all dimensions, including frequency. This means that, as in my diagram above, some splodges are coming up to pretty much max volume (0dB) all the way from end to end of the spectrum. It also means, crucially, that no holes are left along the way.

What problems can we identify here? There are many; let’s work across from left to right.

  • There are no subs (the track will feel bass-light, not heavy, “deep” or “warm” enough).
  • The bassline is too thin. It does not extend far enough to the right – meaning it lacks power in the treble regions. This will equate to a lack of “presence” or “bite”.
  • The kick is too wide and overpowers the bassline by sitting in practically the same place.
  • The pad extends too far left, interfering with the kick drum. This will equate to muddiness.
  • There is a “hole” between the lower elements and the mid/upper elements (in this case, shown between the bassline and snare). Such holes will sound pretty much exactly that – a hole or gap in your track. Your mix will feel incomplete and lack power and fatness.
  • The snare is too thin. It contains only mid/upper-mid/treble frequencies and no lower-mid frequencies at all. This will equate to a weak, “tinny” snare.
  • The synths are also too thin, leaving another small hole in the response.
  • The hats and cymbals tail off too early, leaving a lack of anything at the very far right of the spectrum. This will equate to a lack of “sparkle” or “air” in your track, or in extreme cases, a mix that sounds flat and dull, like it is coming from under a pillow.

See how useful this stuff is? Hopefully so. Well, by now you are probably itching to learn how it is that EQ plugins (or indeed outboard) will magically allow you to fix all this. But the truth is that EQ plugins are not what fixes it – what really helps you out is thinking this way. All the time.
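
If you would rather draw your splodges than just imagine them, a spectrum plot of each element gets you most of the way there. Here is a rough Python sketch (numpy and matplotlib; the stand-in “bassline” and “hats” signals are purely illustrative):

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_splodge(signal, sr, label):
        # A sound's "splodge": its magnitude spectrum on a log frequency axis.
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), 1 / sr)
        plt.semilogx(freqs, 20 * np.log10(spectrum + 1e-12), label=label)

    sr = 44100
    t = np.arange(sr) / sr
    bassline = np.sin(2 * np.pi * 55 * t)     # low, narrow stand-in sound
    hats = 0.1 * np.random.randn(sr)          # broadband noise standing in for hats

    plot_splodge(bassline, sr, "bassline")
    plot_splodge(hats, sr, "hats")
    plt.xlabel("Frequency (Hz)"); plt.ylabel("Level (dB)"); plt.legend(); plt.show()

A spectrum analyser plugin on each channel does the same job in real time, but sketching it once by hand is a good way to internalise what those analysers are showing you.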

Layering sounds

To prove it, I shall now explain how thinking in this fashion can help fix the badly made track described above, without even touching an EQ plugin. Let’s take the snare. How would you describe the timbre of the ideal drum & bass snare? Well, it’s got to be hard, it’s got to be smacking. Remembering that timbre equates to the shape of our splodge, this is our first clue as to what shape of splodge we’re looking for. Given that a sound will generally sound bigger and louder whenever any one of its dimensions inside the box is increased, we will need a fairly wide splodge, covering a large range of frequencies, in order to produce a sound that will really smack the listener’s head up. The ideal snare should have crispness or “snap” to it – this translates to healthy response in the upper-mid regions, and it should also have some “weight” or “beef” to its “thunk” – this equates to a healthy response in the lower-mid regions.

Now, I don’t want to go overboard giving out actual frequencies, because I fear people may take them as magically “true” values, when the truth is that every sound must be evaluated individually within the context of each unique track. But to give you a general idea of what I’m talking about, I find the “thunk” is usually around 200-400Hz. I usually get a good “crack” around 2-3kHz, whereas general sparkle and crispness can be found all the way up to 7-8kHz. I repeat: your mileage may vary.

Imagine that the producer of this not-very-good track is not completely clueless, and that this isn’t an awful, awful snare sample. In fact, let’s imagine it’s quite good. It has a rather decent “snap” to it. It is just too thin and tinny, lacking any weight. In the real world, this is entirely unsurprising. Very few samples will be sufficiently larger than life to become our dream snare in one go. Instead, we look out for another snare which is the opposite of the one we have. A snare which may lack all the good points of our first selection, but that doesn’t matter, as our first selection has them locked down. What matters is that our second snare succeeds where the first fails. In this case, it provides a nice beefy lower thunk. Add the two together, and we are in business.

In practice, it may require more than two samples, but that is the general idea. Adding multiple sounds together can produce a “fatter” result than a single sound, and is best done by choosing different sounds that complement each other, each possessing desirable characteristics in different regions of the frequency spectrum.
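
As a sketch of that layering idea in code – hypothetical file names, assuming the soundfile package is installed and that both samples share the same sample rate – it really is just addition and a little gain staging:

    import numpy as np
    import soundfile as sf

    # Hypothetical samples: one snare with the "crack", one with the "thunk".
    snappy, sr = sf.read("snare_snappy.wav")
    beefy, _ = sf.read("snare_beefy.wav")

    length = min(len(snappy), len(beefy))       # trim to the shorter sample
    layered = snappy[:length] + 0.7 * beefy[:length]

    layered /= np.max(np.abs(layered))          # keep the result inside the box
    sf.write("snare_layered.wav", layered, sr)

In a DAW you would simply stack the two samples on separate channels and balance them with the faders; the point is the same – each layer contributes a different region of the splodge.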

Now we shall look at some of the possible problems that come from layering sounds together.

Adding sounds together

A common mistake made by beginners is to try and gain more fatness by layering the same sound on top of itself, or layering sounds which are rather similar. This is nothing but trouble – it will not make your sound any louder in the ear of the listener, it may cause phasing problems, and it will waste your headroom. Let’s see why this is.

Remember I said that sounds span a whole range of frequencies, and that most splodges are wider than you might imagine? To see why this matters, we go back to the box. Your sounds are physical objects being placed into a physical box, and there is only so much room for them. This analogy can be carried further, for there is only so much room at each given point in the box for objects to be placed on top of each other before the stack becomes too high for the lid. It’s almost like Tetris. Imagine you have laid an arrangement of bricks in your box, and you have another arrangement of bricks next to the box which you want to put into the box in the same formation.

If our box were only tall enough to accommodate a stack of two bricks in height, we would have problems, since we now have a stack that is three bricks high. We cannot fit the lid on. It is the same with digital audio. Just as a “peak” was created when we tried to fill the same spot in our box with bricks from two different piles, so adding sounds together will create a “peak” whenever they try to fill the same spot along the frequency scale with content from two different sounds. There are, however, two key differences. First, we of course do not really have discrete bricks, we just have continuous splodges. Second, it is not possible to simply “leave the lid off”. The lid is absolute. So, what effectively happens instead is that your highest stack of bricks becomes 0dB, and everything else becomes proportionately quieter. This is quite important: an EQ (frequency) issue turns out to impact on the dynamics (volume) of the track! I told you everything was closely inter-related. Now, since everyone wants their track to be nice and loud (right?), we’ll investigate this “adding together” problem imminently…
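
Here is the brick-stacking problem in miniature – a Python sketch with two made-up sounds deliberately parked on the same frequency:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr

    kick_ish = 0.8 * np.sin(2 * np.pi * 60 * t)   # both sounds live around 60Hz...
    bass_ish = 0.8 * np.sin(2 * np.pi * 60 * t)   # ...deliberately in the same spot

    mix = kick_ish + bass_ish
    print(np.max(np.abs(mix)))                    # ~1.6: the stack is too tall for the lid

    # The lid is absolute, so the whole mix has to come down to fit:
    mix_that_fits = mix / np.max(np.abs(mix))
    # Each element is now at roughly half its original level -- the frequency
    # clash has been paid for in volume.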

First, however, I should make absolutely clear that sounds “adding to each other” in this fashion is entirely natural. Do not get over-paranoid about separation and attempt to stop your splodges from ever overlapping each other at all. My splodges in the “well-produced” track were badly airbrushed together for a reason – overlap in itself is normal. It is only sometimes problematic.

In the next section, we shall investigate some scenarios where unwanted or unnecessary frequency content causes problems, and how EQ can help.

Cutting out problems

Take, for example, the pad in the badly-produced tune. Here, we had a problem because the pad was extending too far into the low frequencies, and causing muddiness amongst the bassline and kick. Step back and think about your sounds, and what areas of the spectrum they actually need to be in. Does the pad have any business kicking off around 70Hz? What purpose is the pad serving? Atmosphere – which is for the ears, not the chest. It is supposed to be light and floaty. Clearly, then, it has no business kicking off around 70Hz. And since its low-frequency content is causing problems with our kick and bassline (which have every right to be kicking off in this territory), it’s got to go.

Now imagine our splodge as a block of clay. What we need to do is somehow “sculpt” away at our splodge, carving out all, or at least most, of the stuff on the extreme left-hand side of it. EQ is our basic tool for doing this. It’s worth pointing out here that filters are an equally valid tool, just more overt and with an obvious “character” to them. Consider filters to be akin to taking a knife to your splodge and slicing a piece off it. In dance music, this is not necessarily a bad thing, and so I regularly use filters for EQing tasks. EQ is for subtler shaping – consider it akin to carefully and smoothly rubbing away clay with your fingers. The easiest way to learn EQ is with graphic EQs, since here you can literally draw the kind of shape you want to apply to your sound. Remember, though, that the curve of your graphic EQ is what will be done to what you already have – it is not the shape you will end up with.

Now you see the pad’s splodge tailing off nicely in the low end, leaving a gap for our bassline and kick to cut through unchallenged. In our hypothetical example, the pad was actually causing an audible problem with the kick. However, this task is well worth doing anyway – even if it isn’t! Remembering our piles of bricks, any low frequency content in the pad (or anything else) would only add onto the heavy presence of those same frequencies provided by the kick and bassline. This would create an overall peak in our volume response, and as we have established, the pad has no need for these frequencies, therefore there is no need for this peak to occur. Rather, the peak only serves to force the rest of our track to be quieter.

There is a phrase for this: wasting headroom. The concept of headroom is a very simple one – it’s the amount of space between your loudest point (ie, the tallest point of your track’s overall splodge, as formed by adding all your splodges together) and the lid of the box, 0dB. As I’ve tried to emphasise, this is a finite limit, and all the sounds you use contribute towards reaching this limit. Therefore, any frequencies in any sound whatsoever which do not need to be there are simply wasting headroom, and in doing so, making your track quieter than it needs to be.
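
If you want to put a number on it, headroom is just the gap (in dB) between your tallest peak and full scale. A quick Python sketch, assuming floating-point samples where ±1.0 is the lid:

    import numpy as np

    def headroom_db(mix):
        # Distance in dB between the mix's tallest peak and the 0dBFS lid.
        peak = np.max(np.abs(mix))
        return -20 * np.log10(peak)

    print(headroom_db(np.array([0.5, -0.3, 0.25])))   # ~6.02dB of headroom left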

In practice, this factor is especially relevant to the sub-bass and bass region. It is incredible how much low-frequency energy some sounds have, even the types of sounds you would not expect to. Many times I have cleaned up a hi-hat’s place in the mix after discovering that a low-cut at 200Hz was removing a lot of low-end garbage from it. Anything you record from a mic, even if you engage low-cuts on the microphone and/or preamp, will still almost certainly contain some rumble or other garbage below 100Hz. Many pads and atmospherics from sample CDs or synth presets also have far more low-frequency depth than is actually appropriate for inclusion within a drum&bass mix, because they are designed to sound impressive on their own (to make you buy them).

I have explained why this is a problem. All the low-frequency garbage in these other sounds gets added to the low-frequency content of the kick and bassline, thus increasing the combined volume in the low-frequency region and eating up headroom (your limited space before the 0dB lid). It should be pointed out that (a) this is only such a problem because we are making drum&bass, and (b) we can only fix it the way we do because this is drum&bass. Drum&bass, as the very name suggests, contains a very large amount of bass. Your bassline needs to be massive. This really doesn’t leave room for anything else to occupy its territory. Going back to the box – if you have to put a brick at the left-hand (low frequency) end of the box, which will totally fill that end, then you simply must squeeze everything else up into the other end of the box. In audio terms, this is our low-cut filter or EQ, as applied to the pad above. When making drum&bass it is often a good idea to ritualistically low-cut any sound which does not need low frequencies, just to make sure you have the maximum space available for your bassline.
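
As a sketch of that ritual low-cut in code – assuming scipy is available, and with a purely illustrative 100Hz cutoff that your ears would need to confirm – a simple high-pass filter does the job:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def low_cut(signal, sr, cutoff_hz=100, order=4):
        # High-pass ("low-cut") a sound so it stays out of the bassline's territory.
        sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
        return sosfilt(sos, signal)

    # e.g. clean the sub-100Hz rumble out of a pad before it eats headroom:
    # pad_cleaned = low_cut(pad, sr, cutoff_hz=100)

In a DAW, the equivalent is simply a high-pass filter or low-shelf cut on the channel in question, swept up until the sound starts to thin out and then backed off slightly.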

If you were recording a folk ensemble, you would not have an enormous bassline dominating that end of the spectrum, and there would be room for the sub-100Hz resonances of the guitars, violins, vocals and percussion to be preserved. Indeed, as per point (b), were you to ritualistically cut them all you would end up with a very badly produced folk record indeed. Deprived of their low-end harmonics, all the instruments would sound rather thin and cold. It is only because you are filling that region up with a huge bassline that you can get away with it in drum&bass, and even then, this caution still applies. Go too far, and you may make your sounds rather thin. This is part of the judgement call involved in that word “need”. Sometimes sounds need their low-end left in, otherwise they start sounding stupid. Remember – use your ears!

In any case, this low-cut behaviour is only one particular example of the technique of subtractive EQ. I have drawn special attention to it since it is something which crops up rather regularly given our particular subject matter: bass-heavy dance music. But the general technique of removing or reducing unwanted or unimportant parts of a sound’s splodge is one that can (should?) be used across the spectrum, and ultimately your ears will always have to provide the final judgement.

Our array of splodges

Cast your mind back to our vision of a track as an array of different colored splodges on our graph. Let us imagine a beginner producer struggling to achieve a decent mixdown on a tune. I’m sure we’ve all been there – I know I suffer this problem regularly. A sound is far too quiet, you can hardly hear it… So you turn it up, and five minutes later it is far too loud and now you can hardly hear something else! You keep tweaking, but somehow or other, you can never get the mix you want – which is for each sound to be clearly audible simultaneously. Instead, the elements just fight each other, refusing to gel together. For the sake of clarity, let us forget about all elements of the track except the three which are giving our hypothetical producer a headache: a pad (blue) and two synth parts (grey and yellow – and yes, that’s how I spell grey, thank you very much!).

When depicted visually like this, it becomes quite clear what the problem is. The three elements are all fighting for domination of a single small area in the frequency spectrum. By doing so, none of them is clearly audible, with the added disaster of a large amplitude peak being created (wasting headroom). Although, as I have said, a degree of overlap is entirely normal, ultimately there is not room for all three to occupy the same spot like this.

What can we do about it? Well, the first obvious step is to separate the two synths. They need to be next to each other, not on top of each other. Since the yellow one is already somewhat lower than the grey one, it makes sense to pull the yellow one left a bit and push the grey one right a bit. To achieve this we may allow more low frequencies through on the yellow one (by lowering a hi-pass filter cutoff or reducing any low-cut EQ we have, for example), whilst sculpting away some frequencies from the yellow synth’s upper end (with our subtractive EQ, as on the previous page). For the grey synth, we do the reverse: roll off more of its low end, whilst allowing more upper frequencies to come through (if applicable). In addition, we can reduce the strength of the pad in the frequencies occupied by both the synths, with an EQ notch or two. The pad will still be full strength around them, so we won’t notice a significant change in its timbre – at any rate, it is a background element, so we can afford to twist it around a bit in order to fit the mix, rather more than we could if it were a lead element.

As you can see, everything now fits perfectly. The bad news is that this scenario is entirely fictional. Unless you are an incredibly talented and experienced engineer with a wide range of awesome EQ tools at your disposal, your chances of using EQ shaping to take a track where three sounds are seriously clashing in frequencies, and magically making it all lovely, are in practice very slim. But whilst this example was deliberately exaggerated, and EQ alone won’t fix a royally messed-up track, EQ certainly can help to improve matters when used in this way.
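
For the curious, here is roughly what those moves might look like in code – a hedged Python sketch using scipy, with every frequency purely illustrative and the synth and pad arrays hypothetical (your ears, not these numbers, make the real decisions):

    import numpy as np
    from scipy.signal import butter, iirnotch, sosfilt, lfilter

    sr = 44100

    def shape(signal, cutoff_hz, kind):
        # Gentle high-pass or low-pass to nudge a sound left or right in the spectrum.
        sos = butter(2, cutoff_hz, btype=kind, fs=sr, output="sos")
        return sosfilt(sos, signal)

    # Hypothetical clashing parts (yellow_synth, grey_synth and pad would be numpy arrays):
    # yellow_shaped = shape(yellow_synth, 2000, "lowpass")   # pull it left
    # grey_shaped   = shape(grey_synth, 800, "highpass")     # push it right

    # Notch the pad where both synths live, so they can cut through it:
    b, a = iirnotch(w0=1200, Q=2.0, fs=sr)                   # centre frequency illustrative
    # pad_notched = lfilter(b, a, pad)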

Thinking EQ, not using it

Fortunately, there are many other options available to us, options which lead me back to the overall point of this article: not that you should become a ninja who wields EQ and compression skills at every stage of the game, but rather that you should simply think in these terms, and by doing so, perhaps save yourself from needing them at all.

The problem encountered above, of the two clashing synth parts, can often be avoided entirely by musical means rather than EQ. Some of the most important include:

  • Change the octave of one part. Take one of the two clashing parts and simply drop or raise it by an octave.
  • Change the instrument of one part. If a violin clashes with an ebow guitar, perhaps a tenor sax will not? Clashing is a matter of timbre – the guitar and violin are both stringed instruments, both legato, both in a similar register. A sax has a totally different timbre, and a tenor instrument would likely also be in a different register. In drum&bass production terms, this might equate to flipping patches, loading up a different preset or softsynth to play one riff.
  • Restructure the song so the two parts do not happen simultaneously. Various further options become available here. You might have one riff play for a 16-bar phrase, before stopping and letting the focus be taken by the other riff. You might alternate between riffs quickly, every bar or so, perhaps even breaking the riffs up, to form a question-and-answer motif. Or you might even put them in totally different parts of the track – one synth line after the first drop, a different one after the second drop, or whatever.

If these tricks and EQ still leave your mix unsatisfactory, there are a few other technical routes that can help with clashing sounds:

  • Reverb, to push one sound further “backwards” in the mix
  • Panning – put one left, one right, or one central and fixed, one wide and autopanning, or whatever.
  • Turn one of them down – simple as that (but not, really, honestly, a fix).

If you get this far, and you still can’t sort the mix out, it may be time for the bottom line, which is this: if sounds are hopelessly clashing, you’ll have to ditch one of them. Just get rid of it. Don’t fret about it – set the riff and/or synth settings/sample/etc aside, and use it as a head start on your next track! There’s a point where some sounds just don’t combine, and any amount of work will only dig you in deeper.

Subtractive EQ: Cut Not Boost

You may have noticed we have only been talking about subtractive EQ (that is, sculpting away portions of sound), and not additive EQ. I’m sure you are all already aware that pretty much every EQ under the sun allows you to boost as well as cut, so why have I not discussed that? Because, as a general principle, it is better to cut than boost. I shall explain why with another of my legendarily convoluted metaphors. Remember how subtractive EQ was like scraping clay away from your physical splodge of sound? Well, additive EQ is, therefore, like taking a blob of clay from your stash and whacking it onto your “sculpture”. Follow this metaphor through and you begin to see why it is a bad idea. A clay sculpture of, say, a person’s head, which was made from a cast of their head, will in theory be an exact (or pretty damn close) version of their head. If you take a load of extra clay, and try to add it bit by bit onto the head, to end up with the same head, only twice as large overall… how well would you do? Not very well. Before very long you’d be lucky if the head even looked like a head, let alone recognisably like the person it was cast from. So it is with EQ – when boosting, the unit is amplifying content that was barely there originally (plus any noise around it), and pushing it hard tends to degrade the quality of the signal. It’s the same reason you can cut a piece of paper smaller but you can’t cut it larger. Sort of.

Did that make any sense? I hope so… Anyway… of course, this isn’t to say additive EQ is wrong. Especially not in drum&bass, where there are no rules! You could use extreme additive EQ as a heavy sound-munging tool, for example. It also comes in handy where, for example, your snare has the snap you want, just not quite enough of it. A nice 2dB boost at the sweet spot is a lot easier than adding a whole new layer.

Where additive EQ is definitely discouraged is in situations where subtractive EQ provides an equally worthy alternative. For example, if you have two overlapping sounds, and you want sound A to be more dominant than sound B, you could boost those parts of sound A being obscured by sound B. Far better, though, to cut those parts of sound B which are obscuring A. Aside from my metaphorical explanation above, there is one simple reason why this is better: headroom, again. Yes, any time you make something louder with your EQ, that’s eating into your total headroom, which will ultimately only serve to make your finished track all the quieter. If you can achieve the same result (A dominates B) by removing something from B, then you are not eating up any more headroom – rather, you are keeping it available.
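
The headroom argument is easy to see with some crude Python arithmetic – worst-case in-phase peaks and purely illustrative numbers:

    def db_to_gain(db):
        return 10 ** (db / 20)

    peak_a, peak_b = 0.5, 0.5      # peaks of two overlapping sounds (illustrative)

    # Option 1: boost A by 6dB so it dominates B -> the combined peak shoots up.
    boosted_mix_peak = peak_a * db_to_gain(6) + peak_b       # ~1.5: over the lid
    # Option 2: cut B by 6dB instead -> same relative balance, the peak actually drops.
    cut_mix_peak = peak_a + peak_b * db_to_gain(-6)          # ~0.75: headroom kept

    print(boosted_mix_peak, cut_mix_peak)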

Where boosting helps

Here’s a tip from an engineer friend of mine.

  • “How do you get the drumkit to sound good?”
  • “Well, I make it sound as bad as I possibly can, then I do the opposite”.

On first read, it’s funny, but on closer inspection, it is extremely sound advice. You see, the human ear is a hell of a lot better at hearing things which are there than it is at hearing things which are not there. Drumkit toms, being as they are (a) tuned and (b) beaten hard with a mic millimeters from the surface, are notorious for resonating and causing ringing and feedback. Part of the solution is a sharp EQ notch at the resonant frequency – however, when placing a cut on the EQ and then scrolling the frequency, it is sometimes hard to pick out exactly where you need to be. What is a lot easier is adding a huge boost, and then sweeping the frequency. Sooner or later, all hell will break loose, the drumkit will sound utterly atrocious, the mics will be feeding back like there is no tomorrow – and you know you’ve hit it. Just flip the boost to a cut and you’re sorted.

The same technique can be very helpful when producing. If there is something “annoying” about a sound, it is usually quite hard to work out exactly what annoys you about it, let alone what frequency band this annoyance is emanating from. However, if you add a huge EQ boost, then scroll the frequency, you will often stumble on something very annoying indeed. It’s kinda like zooming in on a picture to better spot flaws in the details, I suppose.
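
If you fancy automating the same hunt, here is a rough Python sketch using scipy. Note that scipy has no ready-made peaking-boost EQ, so this isolates a narrow band at each candidate frequency instead – a slightly different route to the same destination: finding where the energy (and therefore the annoyance) piles up. The sweep range and Q are illustrative, and it is no substitute for doing the boost-and-sweep by ear:

    import numpy as np
    from scipy.signal import iirpeak, lfilter

    def find_resonance(signal, sr, candidates=None):
        # Sweep a narrow band across the spectrum and report where the
        # energy piles up -- an automated cousin of the boost-and-sweep trick.
        if candidates is None:
            candidates = np.geomspace(80, 8000, 60)    # sweep range, illustrative
        band_energy = []
        for f in candidates:
            b, a = iirpeak(w0=f, Q=8.0, fs=sr)         # isolate a narrow band around f
            band = lfilter(b, a, signal)
            band_energy.append(np.sqrt(np.mean(band ** 2)))
        return candidates[int(np.argmax(band_energy))] # flip to a cut at this frequency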

There is no magic formula to EQ but there are some basic principles that will help you enormously throughout the music-making process. Remembering these principles at every step of the way is the only secret you need to know – the rest is down to your own experimentation, tweaking, and ultimately your own ear to make your mixes sound great.

George Matthews

With 17 years of music production experience, George Matthews is the CEO of Your Local Musician. He also makes music under the name Grimmm and releases lo-fi music.
