
Mix techniques

I’m not a professional mix engineer. However, I see so many articles of the “Five Tips to Improve Your Mixes” type that are just filled with bad advice (or at the very least poorly worded advice) that I sometimes feel like the last sane adult out there. So much reliance on processing. So little attention paid to the integrity of the recorded performance.

So, here are my tips. Or perhaps it’s more accurate to say, this is the stuff I pay attention to when mixing. But first, a disclaimer: I’m only talking about rock, indie and acoustic music mixes, here; I don’t do EDM or pop productions, and little of what I have to say would be relevant if those are the fields you’re working in. If you’re working with acoustic instruments, though, maybe I have something useful to teach.

The spine
The key to mixing an arrangement involving vocals, drums and a bass instrument – that is, almost all rock, indie and pop music – lies in the relationship between the lead vocal, the kick drum, the snare drum and the bass. These instruments and sound sources constitute the spine of your mix, the trunk of the tree.

For backbeat-oriented music, it’s standard practice to mix the drums so the kick and snare have equal weight within the aggregate mix. This doesn’t just mean putting the faders for both at unity and leaving it at that. We’re concerned with their level within the drum mix as a whole; if you have a pair of stereo mikes on the kit, they’re contributing, too, so the relative volume of the snare compared to the kick within that stereo pair will also be a factor (if you’re using spaced overheads, typically the snare is prominent and the kick, while present, is more distant and clicky). Pay less attention to the visual level of the transient and more to the felt volume of the meat of the drum. And don’t compress those transients into nothingness – those transients provide energy and excitement.

Whether the kick or the bass occupies the perceived “lowest” portion of the frequency spectrum will depend on the song and what the bassist is doing. If the material features the bass being played mainly in the second octave, the fundamental of the kick drum will live below the bass’s centre of energy. If the bassist and the kick drum are competing with each other, try rolling off the kick’s low end a little and emphasise the beater (more of that later) to give the kick more clarity and audibility.
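
To put rough numbers on that, here's a quick sketch (Python, assuming equal temperament and a standard-tuned bass; the kick figure is a generalisation, not a measurement):

```python
# Rough fundamentals, to see where kick and bass contest the low end.
def note_hz(midi_note: int) -> float:
    """Equal-temperament frequency, with A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(f"Bass low E (E1): {note_hz(28):6.1f} Hz")   # ~41 Hz
print(f"E2 (octave up):  {note_hz(40):6.1f} Hz")   # ~82 Hz
print(f"E3:              {note_hz(52):6.1f} Hz")   # ~165 Hz

# A kick drum's fundamental commonly sits around 50-60 Hz. A bass part
# living in the 80-165 Hz region leaves the kick the bottom octave to
# itself; a part down around 41-80 Hz will collide with it.
```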

I like to think of the vocal as sitting on a platform created by the kick and snare drums. Mix it too loud and the voice seems to float above the music, creating what I call “big giant head” syndrome. To check you’ve got the balance about right, here’s a hack that actually works: slowly turn the master volume down until the music is only just audible. If the last things you can hear are the vocal and the snare drum, that’s usually a good sign.

A lot of rock records have the vocals sunk a little further into the mix (an aesthetic that goes back at least as far as the Rolling Stones). If that’s your thing, make sure the vocal is still legible. You can drop it a long way back (e.g. the Police, early R.E.M., Dire Straits), but don’t bury the vocal entirely.

Balance – panning
They used to call recording engineers “balance engineers”, and the term is an instructive one. Achieving a balance between all the elements in the mix on a second-by-second basis is what we do.

That means getting the relative volume levels right, of course, but it also means placing the elements within the stereo field to achieve a pleasing spatial balance. We’ve already discussed the relationship between the kick, snare, bass and vocal. These elements are almost invariably centre panned, and have been since the late 1960s. But what to do with harmonic instruments? Where do they go?

It’s going to depend a lot on what has been recorded for the production, as well as the panning scheme you favour as a mix engineer.

I’m a proponent of LCR panning, meaning elements are panned 100% left, 100% right or centre (except close tom mikes, which I pan to match the toms’ positions in the stereo image). Panning this way means that the instruments retain their relative positions in the stereo field wherever you may be standing in relation to the speakers; a guitar panned 18% left will be perceived as 18% left only as long as you sit right in the middle of the speakers. Move away from that point, and you change your perception of where all non-centre-panned instruments are.

Now, some mix engineers don’t care about that, and they happily pan elements slightly off centre, or nearly all the way left but not quite. Me, I prefer the clarity and stability of LCR.
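
For the curious, here’s roughly what a conventional pan pot does under a constant-power pan law – a generic sketch, not any particular DAW’s implementation. LCR simply means only ever using the three unambiguous positions:

```python
import math

def constant_power_pan(position: float) -> tuple[float, float]:
    """Constant-power pan law.
    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain); left^2 + right^2 == 1 throughout,
    so perceived loudness stays constant as a source moves across."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

for label, pos in [("hard L", -1.0), ("18% L", -0.18),
                   ("centre", 0.0), ("hard R", 1.0)]:
    left, right = constant_power_pan(pos)
    print(f"{label:>6}: L={left:.3f}  R={right:.3f}")
```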

But LCR requires a degree of forethought. If you track a four-piece band (bass, drums, rhythm and lead guitar) as live, it might make sense to pan the two guitar tracks left and right, but what happens when the lead guitarist plays a solo? Do you move it to the centre? Keep it out wide? Have the guitarist not play a solo during the live take but instead double the rhythm part, then overdub the solo later? Record the rhythm player through two amps, split left and right, and put the lead guitarist in the centre with the vocalist? All are defensible strategies, but it pays to consider them before tracking. If you’re just mixing and you’ve had no say in what was tracked, don’t try to force a panning scheme on the track that the arrangement doesn’t support. Better to have a narrow mix with everything in the centre than a completely wacky mix with the acoustic rhythm guitar left and the bass guitar right, simply because you want to make the mix “more stereo”.

Balance – volume
So programme-dependent it’s hardly worth talking about, but here’s one thought. One of the biggest differences I hear between modern mix topologies and those from the 1960s and 1970s is the treatment of simple rhythm accompaniments on acoustic guitar or piano.

There’s a tendency towards giving everything a big sound these days (largely because instruments are usually all tracked separately with close mikes), which tends to make mixes feel cluttered and airless. To compensate, engineers end up carving loads of lows and low-mids out of, say, an acoustic rhythm guitar and adding lots of top end to give it “air” and reduce the sense of clutter. Consider miking simple acoustic rhythm guitar parts a little more ambiently and mixing them lower. If the acoustic is the main instrument, that’s different, but if it’s just providing harmonic glue and texture, does it need to be prominently audible in every single moment of the song? Probably not. If you’re after a 1970s feel, listen to how the acoustic rhythm part is treated on (just to think of a few artists from across the spectrum) Pink Floyd, Van Morrison or Eagles records, and try treating it similarly.

Compression
Ah, the great Satan of modern mixing. The humble compressor. So many ways for it to kill your mix stone dead. Let’s take them one at a time.

Mix-buss compression
I don’t usually do this. Many engineers take a compressor they feel is euphonious – one that adds a pleasant density or tonal characteristic – and use it on the stereo master outs. If you’re going to go down this road, be careful not to overdo it: medium attack and release times and a relatively gentle ratio (1.5:1 or 2:1) will probably sound more transparent than more extreme settings. And remember that you can destroy a song’s feel very quickly by ignoring its tempo and groove and applying attack and release settings that are inappropriate for the song.
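
To make those numbers concrete, here’s a minimal sketch of a compressor’s static gain computer and ballistics (a generic feed-forward design; the threshold and times are illustrative, not a recommendation):

```python
import math

def gain_reduction_db(envelope_db: float, threshold_db: float = -12.0,
                      ratio: float = 1.5) -> float:
    """dB of gain reduction above the threshold. At 1.5:1, a signal
    6 dB over the threshold is reduced by 6 - 6/1.5 = 2 dB."""
    over = envelope_db - threshold_db
    return max(0.0, over - over / ratio)

def ballistics_coeff(time_ms: float, sample_rate: float = 48000.0) -> float:
    """One-pole smoothing coefficient for attack/release. At 120 bpm a
    beat lasts 500 ms, so a release well under that lets the compressor
    recover between backbeats instead of pumping across them."""
    return math.exp(-1.0 / (time_ms * 0.001 * sample_rate))
```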

Channel compression
I tend to be looking for a classic rather than a contemporary sound, so I don’t like to hear a compressor working (certainly not when listening to the source within the aggregate mix). Depending on the instrument – and certainly for vocals – I like to apply post-fader compression and solve the bigger dynamics issues with automation. The compressor then gently reduces the dynamic range of a slightly more idealised version of the performance. I work digitally (and am therefore not limited by a rack of expensive hardware), and one upside of that is that you can chain compressors far more cheaply than you can in the physical world! If I need a lot of gain reduction and don’t want to choke the life out of a source entirely, I’ll set up a couple, typically pre- and post-fader, and let fader moves and the compressors split the work between them.
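
One way to see why splitting the work helps: once a signal is above both thresholds, series compressors multiply their ratios, so two gentle stages reach the same total gain reduction as one much steeper one (a back-of-envelope sketch):

```python
# Above both thresholds, the slopes (1/ratio) multiply in series:
# delta_out = delta_in / (r1 * r2)
r1, r2 = 2.0, 2.0
print(f"{r1}:1 into {r2}:1 behaves like {r1 * r2}:1 overall")
# ...but each stage only works a few dB, so neither chokes the source.
```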

Buss compression
All engineers approach this differently. I typically set up busses for drums (minus toms), toms, acoustic guitars, electric guitars, ooh- and ahh-type backing vocals, and lead and close harmony vocals. I may buss single instruments like piano and bass guitar, but usually only if they’ve been recorded with several mikes or, say, DI and amp for the bass. Drums I tend to hit with a few dB of gain reduction, vocals likewise (again, maybe post-fader – it depends on the dynamic of the performance). Electric guitar is very programme-dependent; distorted guitar I likely won’t compress at all, anywhere down the line. On acoustic guitar and clean electric, I’ll probably use a little to glue things together tonally rather than for significant gain reduction, and use fader moves to make the guitars sit where I want them.
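
Laid out as data, that routing looks something like this (a sketch only – the channel names are invented for illustration, nothing Cubase-specific):

```python
# Buss layout described above; each buss can then take its own compressor.
busses = {
    "drums":   ["kick", "snare", "hats", "overhead_L", "overhead_R"],  # toms excluded
    "toms":    ["rack_tom", "floor_tom"],
    "ac_gtrs": ["acoustic_1", "acoustic_2"],
    "el_gtrs": ["electric_rhythm", "electric_lead"],
    "bvs":     ["ooh_1", "ooh_2", "ahh_1"],
    "vocals":  ["lead_vocal", "close_harmony"],
    "bass":    ["bass_di", "bass_amp"],  # bussed because it's two sources
}
```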

Equalisation
There’s a long-held and widely shared belief that subtractive EQ is better than additive EQ. It is, I think, a myth. Those who counsel against additive EQ on the grounds that you’re trying to boost what isn’t there have a point – but only if that is actually what you’re doing, which is rare for anyone who isn’t a total newbie. Trying to add brilliance to a bass drum track by boosting 10k is absurd. Trying to emphasise the beater impact of a kick drum by making a boost somewhere between 2k and 4k (depending on tuning and beater material) is just emphasising what self-evidently is there.

On the whole, I probably do subtract frequencies more often than boost them, but I’m always happy to make small boosts where needed. For example, I often add a little high end to vocals (above the range of sibilance so things don’t get spitty) and, within a dense mix, I’ll look to give a boost to the audibility of toms by bringing out the stick impact rather than the drum’s fundamental.

In terms of subtractive EQ, I work in fairly conventional ways. I’ll look to take some low mids out of boomy acoustic guitar tracks, and often emphasise the low end of a tom by cutting a little into the mids. If a bass drum is moving a lot of air but feels a little less present than I want, sometimes rolling off below ~60Hz can be helpful (I often do this in conjunction with the beater-frequency boost mentioned earlier).
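
That kick treatment – rolling off below ~60Hz while bringing out the beater – can be sketched with the standard biquad formulas from the RBJ Audio EQ Cookbook (the corner, centre frequency, gain and Q here are placeholders to taste):

```python
import math

def rbj_peaking(fs: float, f0: float, gain_db: float, q: float):
    """Peaking EQ from the RBJ Audio EQ Cookbook; returns (b, a)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * A, -2.0 * math.cos(w0), 1.0 - alpha * A]
    a = [1.0 + alpha / A, -2.0 * math.cos(w0), 1.0 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def rbj_highpass(fs: float, f0: float, q: float = 0.707):
    """Second-order high-pass filter, same cookbook."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    c = math.cos(w0)
    b = [(1.0 + c) / 2.0, -(1.0 + c), (1.0 + c) / 2.0]
    a = [1.0 + alpha, -2.0 * c, 1.0 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

# e.g. a gentle roll-off below 60 Hz plus a 3 dB "beater" boost at 3 kHz
hp = rbj_highpass(48000.0, 60.0)
beater = rbj_peaking(48000.0, 3000.0, 3.0, 1.0)
```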

I’m usually working in quite naturalistic sound worlds: I want to get a sound in front of a microphone, capture it, and present it in the mix transparently, so EQ is not something I leave to fix in the box after tracking. Rather, the instrument being played, the pickup used, the pedals and amps used, the position of the mike, the choice of mike – all of these determine whether I use lots of EQ or none at all.

Hand in hand with the natural-sound thing, the ideal situation, if I’ve been recording a good player on a good instrument and done my job with mike positioning, is that I apply no EQ at all. If I liked the sound in the room, there really should be no reason not to like it on tape, so to speak.

Which I guess leads us to…

Conclusion
The biggest issue I have with a lot of the “5 best tips to help you mix like a pro!” nonsense I see all over the internet is that so many of them present techniques that are sometimes useful (often as Hail Marys more than anything) as regular, staple techniques that you “should” be using. I read one guide the other day that said something to the effect of “You’re going to want to high-pass filter all your tracks to remove the low end”. But why? Can’t I listen to the track first to see whether that’s necessary? What if the band knows how to arrange their music and the tracking engineer recorded them in such a way that there’s no build-up of clutter down there?

The best tip I could give anyone is this: do nothing simply for the sake of doing something; leave well alone if you can’t account for your intervention; resist the temptation to process just because you can. A good 80% of mixing lies in the performance and the tracking – if a performance is captured well and is solid in terms of sound and technique, the results mix themselves. Any engineer who tracks as well as mixes would, Steve Albini style, benefit from putting most of their effort into improving their miking technique and gain structure. The mix then becomes an infinitely simpler process.


On Recalls & Mixing in the Digital Domain

At the moment I’m working quite hard on a couple of recordings I’ve got in progress. I’m a one-man-band kind of guy, playing all the instruments, and recording and mixing the tracks myself. That necessarily leads to a certain way of working if, like me, you have a full-time day job. I fit recording and mixing work into spare hours and half-hours whenever they occur, or save up a few tasks to justify the effort of setting up a drum kit, or a guitar-and-amp rig, and placing microphones. In the past, when I was a freelancer and worked from home, I could block out chunks of time to record pretty much whenever I wanted to, and could have the recording of a song mixed within 24 hours of writing it. Nowadays it takes a few weeks usually. It’s a drawn-out, accretive process.

This way of working is dependent on the ability of DAW software to recall every aspect of the audio project for me. I load the project file in my DAW of choice (Cubase), and every channel is the way I left it: all the inserts are there with exactly the same settings I was using before, the tracks are all routed to the same busses, all my automation data is the way it was last time. What would take hours of work in the analogue realm is reduced to the 30 seconds or so my laptop and edition of Cubase require to load a complicated project.

The implications of this technology for the way music is mixed and the way it sounds when you hear it on the radio are enormous, and are probably only truly understood by recording engineers, especially those who learned their trade during the analogue era.

Almost any record you care to name from the pre-digital era (digital recording that is, not digital playback) has flaws or idiosyncrasies in it that could have been ironed out with one last recall session, but which weren’t worth the time and effort required to do the recall. If you were working on analogue tape with a console, doing a recall to make a couple of tweaks to the vocal level was an expensive luxury few could afford. To allow the tweaks to be made, the engineer or the engineer’s assistant would have to reconstruct the mix on the desk, using notes and snapshots taken during the previous session. Hardware audio processors would have to be re-inserted over the correct channels, tracks bussed appropriately, EQ settings precisely dialled in. It took time, and it wasn’t always easy to get everything exactly the same. An engineer skilled at quickly and accurately recalling a mix was worth his or her weight in gold to a producer or mixer.

Even so, a band was unlikely to get the producer to consent to a recall unless the producer felt the tweaks the band wanted were justified. A recall meant 3-4 hours’ work, and time is money in the recording studio, as it is anywhere else. Digital mixing consoles began to include some recall functions in the 1990s, which sped up the process a bit, but these desks rarely sounded as good as the real analogue deal, and they only went so far: no console can actually plug in an LA2A for you.

It was the DAW that allowed the situation we have now, where any mix can be perfectly recalled, tweaked and printed (that is, mixed down to stereo) whenever the band or producer want. As with anything else, it’s a double-edged sword. When listening to other people’s music, I may decry the primped sterility of the end result: recordings that have been airbrushed to within an inch of their lives, where every instrument and vocal performance is in fixed audibility at all times in a way that could never happen in a live performance captured to tape, and with no technical flaws or blemishes, no matter how tiny, allowed to make it through to the master. Yet I’m dependent on that same technology to make any recordings at all, and I’m as guilty as the next man of stewing over a mix for several days before going back in and systematically fixing all the things that bugged me about the last version.

So what else is new? Replace “digital mixing” with “CGI” and let a movie buff give you their cri de coeur on the superiority of in-camera practical effects work. This is simply the world we live in. When you next hear a brand-new recording straight after a classic on your iPod or on the radio, listen to the differences. Feel them. I know which I prefer to listen to, and sadly, I also know which kind of recordings I’m making.

[Image: Doing a recall in 2016]

When did the eighties become the eighties? or, transition periods in mix fashion

I had an interesting conversation with Yo Zushi the other night about fashion in music production and mix.

Both of us have a soft spot for Boz Scaggs and his super-cool, ultra-smooth blue-eyed soul, and I remarked on Middle Man being one of the best-sounding records I could think of. For all its song-for-song quality, Scaggs’s masterpiece, Silk Degrees, doesn’t have the drum sound that graces Middle Man cuts like JoJo. It’s precise, it’s powerful, and it seems to me to retain far more of the sound you hear when you’re seated on the drum stool.

Middle Man, released in 1980, was recorded at the back end of 1979, using old-school analogue technology. By then, recording and mix engineers had had a few years to become familiar with the technology of 24-track analogue: to learn how to compensate for the reduced track width caused by cramming that many tracks into two inches of tape, to discover ways to warm up the relatively sterile transistor-based desks that were now the rule rather than the exception, and to begin to derive the benefits of new automation technology, which allowed for more precise mixing, particularly of vocals (automation lets you program your fader moves in advance, rather than having to perform them on the fly).

So Middle Man, produced by Bill Schnee (who’d engineered Steely Dan’s Aja three years before), came out during a sort of period of grace. It was also a period when fashions were changing. The tight, dry West Coast sound of Middle Man was falling out of favour, especially in New York and in the UK: Jimmy Iovine (an East Coast guy through and through, even when he was working in LA) had already made Darkness on the Edge of Town (at the Record Plant in New York) and Damn the Torpedoes (at Sound City in Van Nuys), and soon he’d apply that same absurd cannonball-hits-crash-mat drum sound to Stevie Nicks’s Bella Donna. In the UK, meanwhile, Hugh Padgham had stumbled across the gated reverb effect while recording Peter Gabriel’s third solo album. In 1981 Phil Collins would unleash his gated mega-drums on In the Air Tonight and it would be all over for the Californian aesthetic.

Except, no. It wouldn’t be.

Things aren’t that neat. There were still plenty of records made in the first few years of the 1980s with the dead sound associated with the 1970s (think of something like Michael McDonald’s 1982 hit album If That’s What it Takes, which sonically speaking could have been made the same year as Aja), and a lot of the things we think of as being key to the eighties sound were invented so late in the 1970s or so early in the 1980s that their true impact wasn’t felt until the decade was well underway: the Linn drum machine, the Fairlight CMI, the Emulator, the Synclavier, digital reverb units like the Lexicon 224 and so on.

The same was true at the start of the 1990s. Sure, Matthew Sweet’s Girlfriend, with its startlingly bone-dry sound, may have pointed to where things were going and acted as a necessary corrective to the never-ending decays on vocals and snare drums that were so prevalent at the arse end of the eighties. Sure, Bob Clearmountain’s mixes were coming back down to earth (by 1993 he’d be doing his best ever work on Crowded House’s Together Alone) after his big bam booming period mixing Hall & Oates, Huey Lewis and Bryan Adams. And sure, Andy Wallace’s Nevermind mix was, despite its use of reverb samples, far drier than it could have been in someone else’s hands. But as late as 1993, Big Head Todd and the Monsters could have a platinum record with an album that deployed extremely prominent gated reverb on the drums.* That’s to say nothing of Pearl Jam’s Ten, which sounds as if it was tracked in a cave.**

At some point a trend gets overdone and a small vanguard starts going the other way to distinguish themselves from the herd. The question is, in our own era, who’s going to do it and what’s going to change?

[Image: Promo shot, circa Sister Sweetly: Todd Park Mohr, Brian Nevin, Rob Squires]

*If you’re not American – hell, if you weren’t living in the Mountain States in the early 1990s – you may not be aware of Big Head Todd and the Monsters. Let me assure you, then, that this was not a case of a behind-the-times band from the boondocks getting lucky: Sister Sweetly was produced and mixed by Prince sideman David Z at the Purple One’s own Paisley Park studio. The record, for whatever reason, just completely ignored the production trends of the preceding two years or so, and must have sounded almost laughably old-fashioned the moment it was released. Nonetheless it’s a decent record and it sold a million in the US.

**The Pearl Jam guys disliked the mix enough that the 2009 re-release included a remix of the whole album. It’s noticeably drier.