I’m not a professional mix engineer. However, I see so many articles of the “Five Tips to Improve Your Mixes” type that are just filled with bad advice (or at the very least poorly worded advice) that I sometimes feel like the last sane adult out there. So much reliance on processing, and so little attention paid to the integrity of the recorded performance.
So, here are my tips. Or perhaps it’s more accurate to say, this is the stuff I pay attention to when mixing. But first, a disclaimer: I’m only talking about rock, indie and acoustic music mixes, here; I don’t do EDM or pop productions, and little of what I have to say would be relevant if those are the fields you’re working in. If you’re working with acoustic instruments, though, maybe I have something useful to teach.
The key to mixing an arrangement involving vocals, drums and a bass instrument – that is, almost all rock, indie and pop music – lies in the relationship between the lead vocal, the kick drum, the snare drum and the bass. These instruments and sound sources constitute the spine of your mix, the trunk of the tree.
For backbeat-oriented music, it’s standard practice to mix the drums so the kick and snare have equal weight within the aggregate mix. This doesn’t just mean putting the faders for both at unity and leaving it at that: we’re concerned with their level within the drum mix as a whole. All your sound sources – stereo mikes on the kit (usually overheads, but not always), room mikes, close tom and cymbal mikes – contribute to the overall sound. So, when balancing snare and kick, the relative volume of the snare compared to the kick within all the other drum tracks will be a factor (if you’re using spaced overheads, typically the snare is prominent while the kick, though present, is more distant and clicky). Pay less attention to the peak level of the transient and more to the felt volume of the meat of the drum. And don’t compress those transients into nothingness – they provide energy and excitement. Try letting a little transient through: even setting your compressor’s attack to 2.5ms rather than 0ms will make a difference.
Whether the kick or the bass occupies the perceived lowest portion of the frequency spectrum will depend on the material and on what the bassist is doing. If the bass is played mainly in the second octave, the fundamental of the kick drum will live below the bass’s centre of energy; but if the bassist is playing first-octave stuff and bass and kick drum are competing with each other, try rolling off the kick’s low end a little and emphasising the beater (more on that later) to give the kick more clarity and audibility.
I like to think of the vocal as sitting on a platform created by the kick and snare drums. Mix it too loud and the voice seems to float above the music. To check you’ve got the balance about right, here’s a hack that actually works: slowly turn the master volume down until the music is only just audible. If the last things you can hear are the vocal and the snare drum, that’s usually a good sign.
A lot of rock records have the vocals sunk a little further into the mix (an aesthetic that goes back at least as far as the Rolling Stones). If that’s your thing, make sure the vocal is still legible. You can drop it a long way back (e.g. the Police, early R.E.M., Dire Straits), but don’t bury the vocal entirely, unless the band’s aesthetic really is to treat the vocal as a texture (as in much of My Bloody Valentine’s work, for example).
Balance – panning
They used to call recording engineers “balance engineers”, and the term is an instructive one. Achieving a balance between all the elements in the mix on a second-by-second basis is what mixing is.
That means getting the relative volume levels right, of course, but it also means placing the elements within the stereo field to achieve a pleasing spatial balance. We’ve already discussed the relationship between the kick, snare, bass and vocal. These elements are almost invariably centre-panned, and have been since the late 1960s. But what to do with harmonic instruments? Where do they go?
It’s going to depend a lot on what has been recorded for the production, as well as the panning scheme you favour as a mix engineer.
I’m a proponent of LCR panning, meaning elements are panned 100% left, 100% right or dead centre (except close tom mikes, which I pan to the places where the toms appear in the stereo image). Panning this way means the instruments retain their relative positions in the stereo field wherever you happen to be standing in relation to the speakers; by contrast, a guitar panned 18% left will be perceived as 18% left only as long as you sit exactly in the middle of the speakers. Move away from that point, and your perception of where every non-centre-panned instrument sits changes.
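If it helps to see the arithmetic behind that, here’s a minimal sketch in Python, assuming the common constant-power (sin/cos) pan law – the exact law varies from console to console and DAW to DAW, and `pan_gains` is just an illustrative helper of my own:

```python
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Left/right channel gains for a pan position.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Uses the constant-power (sin/cos) law, so total acoustic
    power is the same at every pan position.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# LCR positions are unambiguous: all-left, equal in both, all-right.
print(pan_gains(-1.0))   # hard left: signal only in the left channel
print(pan_gains(0.0))    # centre: equal level in both channels
print(pan_gains(1.0))    # hard right

# An 18%-left pan, by contrast, is only a modest level difference
# between the speakers - which is why it reads as "18% left" solely
# from the sweet spot; off-axis, the balance your ears receive shifts.
print(pan_gains(-0.18))
```

The point of the sketch is that hard-panned and centre elements survive a change of listening position, while intermediate positions are encoded only as small inter-channel level differences.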
Now, some mix engineers don’t care about that, and they happily pan elements slightly off centre, or nearly all the way left but not quite. Me, I prefer the clarity and stability of LCR. Mixes done that way sound bigger and more pleasing to me.
But LCR requires a degree of forethought. If you track a four-piece band (bass, drums, rhythm guitar and lead guitar) live, it might make sense to pan the two guitar tracks left and right, but what happens when the lead guitarist plays a solo? Do you move it to the centre, or keep it out wide? Do you have the lead guitarist not play a solo during the live take but instead double the rhythm part, then overdub the solo later? Do you record the rhythm player through two amps, split left and right, and put the lead guitarist in the centre with the vocalist? All of those approaches are workable strategies, but it pays to consider them before tracking.
If you’re mixing but didn’t track the recording, don’t try to force a panning scheme on the track that the arrangement doesn’t support. Better to have a narrow mix with everything in the centre than a completely wacky mix with the acoustic rhythm guitar left and the bass guitar right, simply because you want to make the mix “more stereo”.
Balance – volume
So programme-dependent it’s hardly worth talking about, but here’s one thought. One of the biggest differences I hear between modern mix topologies and those from the 1960s and 1970s is the treatment of simple rhythm accompaniments on acoustic guitar or piano.
There’s a tendency these days towards giving everything a big sound (largely because instruments are usually all tracked separately with close mikes), which can make mixes feel cluttered and airless. To compensate, engineers end up carving loads of lows and low-mids out of, say, an acoustic rhythm guitar and adding lots of top end to give it “air” and reduce the sense of clutter. Consider instead miking simple acoustic rhythm guitar parts a little more ambiently and mixing them lower. If the acoustic is the main instrument, that’s different; but if it’s just providing harmonic glue and texture, does it need to be prominently audible in every single moment of the song? Probably not. If you’re after a 1970s feel, listen to how the acoustic rhythm part is treated on (to pick a few artists from across the spectrum) Pink Floyd, Van Morrison or Eagles records, and try treating it similarly.
The great Satan of modern mixing: the compressor. There are so many ways a compressor can kill your mix stone dead. Let’s take them one at a time.
Compression – the master buss

I don’t do this routinely. Many engineers take a compressor they feel is euphonious and adds a pleasant density or tonal characteristic and strap it across the stereo master buss. If you’re going to go down this road, be careful not to overdo it: medium attack and release times and a relatively gentle ratio (1.5:1 or 2:1) will probably sound more transparent than more extreme settings, and remember that you can destroy a song’s feel very quickly by applying attack and release settings that fight its tempo and groove.
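For the curious, the arithmetic behind those settings can be sketched in a few lines of Python. This is a hypothetical, heavily simplified feed-forward design – real compressors differ in detector, knee and topology – but it shows what a gentle ratio and a non-zero attack actually mean:

```python
import math

def gain_reduction_db(level_db: float, threshold_db: float, ratio: float) -> float:
    """Static curve: dB of gain reduction for a given detector level.

    Above threshold, the output rises 1 dB for every `ratio` dB of
    input, so a gentle 2:1 ratio halves the overshoot rather than
    flattening it.
    """
    over = level_db - threshold_db
    if over <= 0.0:
        return 0.0
    return over - over / ratio

def smoothing_coeff(time_ms: float, sample_rate: float = 48000.0) -> float:
    """One-pole smoothing coefficient for an attack or release time.

    A 2.5 ms attack (rather than 0 ms) means the gain reduction takes
    a few milliseconds to settle, letting the drum transient through.
    """
    return math.exp(-1.0 / (time_ms * 0.001 * sample_rate))

# A signal 6 dB over threshold at 2:1 is pulled down by 3 dB:
print(gain_reduction_db(-12.0, -18.0, 2.0))   # -> 3.0
# At 1.5:1 the same overshoot is reduced by only 2 dB:
print(gain_reduction_db(-12.0, -18.0, 1.5))   # -> 2.0
```

The smoothing coefficient sits between 0 and 1 and moves closer to 1 as the time constant lengthens – which is the numerical face of “slower attack, more transient survives”.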
Compression – individual tracks

I tend to be looking for a classic rather than a contemporary sound, so I don’t like to hear a compressor working (certainly not when listening to the source within the aggregate mix). Depending on the instrument – and certainly for vocals – I like to apply post-fader compression and solve the bigger dynamics issues with automation; the compressor then gently reduces the dynamic range of a slightly more idealised version of the performance. I’m working digitally (and therefore not limited by needing lots of expensive hardware), and one upside of that is that you can chain compressors far more cheaply than in the physical world. If I need a lot of gain reduction and don’t want to choke the life out of a source, I’ll set up a couple, typically pre- and post-fader, and let the fader moves and the compressors split the work between them.
Compression – bussing

All engineers approach this differently. I typically set up busses for the drum mix minus toms, toms, acoustic guitars, electric guitars, ooh- and ahh-type backing vocals, and lead and close-harmony vocals. I sometimes buss single instruments like piano and bass guitar, but usually only if they’ve been recorded with several mikes or, say, DI and amp for the bass. Drums I tend to hit with a few dB of gain reduction, vocals likewise (especially if I’m looking to glue lead, double tracks and close-harmony tracks together). Electric guitar is very programme-dependent; distorted guitar I likely won’t compress at all, anywhere down the line. Acoustic guitar and clean electric I’ll probably compress a little to glue things together tonally, rather than for significant gain reduction, and use fader moves to make the guitars sit where I want them.
EQ

There’s a long-held and widely shared belief among mix engineers that subtractive EQ is better than additive EQ. On the whole, I think it’s largely a myth. Those who counsel against additive EQ on the grounds that you’re trying to boost what isn’t there have a point – but only if that is actually what you’re doing, which is rare for anyone who isn’t a total newbie. Trying to add brilliance to a bass drum track by boosting 10k is, of course, absurd. Trying to emphasise the beater impact of a kick drum with a boost somewhere between 2k and 4k (depending on tuning and beater material) is just emphasising what self-evidently is there.
On the whole, I probably do subtract frequencies more often than boost them, but I’m always happy to make small boosts where needed. For example, I often add a little high end to vocals (above the range of sibilance so things don’t get spitty) and, within a dense mix, I’ll look to give a boost to the audibility of toms by bringing out the stick impact rather than the drum’s fundamental.
In terms of subtractive EQ, I work in fairly conventional ways. I’ll look to take some low mids out of boomy acoustic guitar tracks, and often emphasise the low end of a tom by cutting a little into the mids. If a bass drum is moving a lot of air but feels a little less present than I want, sometimes rolling off below ~60Hz can be helpful (I often do this in conjunction with the beater-frequency boost mentioned earlier).
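If you like to see what a roll-off like that is doing under the hood, here’s a sketch using the widely published Audio EQ Cookbook (RBJ) high-pass formulas. The 60Hz corner is just the example figure from above, and `highpass_biquad` is my own illustrative helper, not any particular plug-in’s API:

```python
import math

def highpass_biquad(freq_hz: float, sample_rate: float = 48000.0, q: float = 0.7071):
    """Second-order high-pass coefficients (Audio EQ Cookbook formulas).

    Returns (b0, b1, b2, a1, a2), normalised so that a0 == 1.
    """
    w0 = 2.0 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    cw = math.cos(w0)
    a0 = 1.0 + alpha
    b0 = (1.0 + cw) / 2.0 / a0
    b1 = -(1.0 + cw) / a0
    b2 = (1.0 + cw) / 2.0 / a0
    a1 = -2.0 * cw / a0
    a2 = (1.0 - alpha) / a0
    return b0, b1, b2, a1, a2

def process(samples, coeffs):
    """Direct Form I filtering of a list of samples."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Rolling off the kick below ~60Hz: energy at 0Hz (DC) is rejected
# entirely, while content well above the corner passes essentially
# untouched - the numerator summing to zero at DC is the giveaway.
b0, b1, b2, a1, a2 = highpass_biquad(60.0)
print(b0 + b1 + b2)  # numerator gain at DC: ~0, i.e. full rejection
```

In practice you’d just reach for your EQ’s high-pass section, of course; the sketch is only meant to show that a “roll-off below 60Hz” is a precise, well-defined operation rather than mix-engineer hand-waving.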
I’m usually working in quite naturalistic sound worlds: I want to get a sound in front of a microphone, capture it, and present it in the mix transparently, so EQ is not something I reach for by default in the box after tracking. Rather, the instrument being played, the pickup used, the pedals and amps used, the position of the mike, the choice of mike – all of these determine whether I end up using lots of EQ or none at all.
Hand in hand with the natural-sound thing, the ideal situation, if I’ve been recording a good player on a good instrument and done my job with mike positioning, is that I apply no EQ at all. If I liked the sound in the room, there really should be no reason not to like it on tape, so to speak.
Which I guess leads us to…
The biggest issue I have with a lot of the “5 best tips to help you mix like a pro!” nonsense I see all over the internet is that so many of these guides present techniques that are sometimes useful (often as Hail Marys more than anything) as regular, staple techniques that you “should” be using. I read one the other day that said something to the effect of “You’re going to want to high-pass filter all your tracks to remove the low end”. But why? Can’t I listen to the track first to see whether that’s necessary? What if the band knows how to arrange and play their music well, and the tracking engineer recorded them in such a way that there is no build-up of clutter down there?
The best tip I could give anyone is this: don’t do anything simply for the sake of doing something; leave well alone if you can’t account for your intervention; resist the temptation to process just because you can.
Even today, when naturalistic, organic mixes are not particularly fashionable, even in indie and acoustic music, a good 80% of mixing lies in the performance and the tracking. If a performance is captured well and is solid in terms of sound and technique, the results still all but mix themselves. Any engineer who tracks as well as mixes would, Steve Albini style, benefit from putting most of their effort into improving their miking techniques and gain structuring. The mix will then be an infinitely simpler process.