Several years ago, companies like Sonible and iZotope made waves with their “AI” plugins, which promised to more or less EQ your tracks for you. You select a profile, such as acoustic guitar, and the plugin creates a setting it “thinks” suits your track. The appeal is that you have an “assistant” who will start your mix for you.
It should be noted that this AI technology is not the same as that behind large language models like ChatGPT. While I have several ethical reservations about using LLMs, I don’t have quite the same reservations about “AI” plugins like Sonible’s offerings. Nevertheless, I’ve stopped using them anyway.
I used to reach for a smart EQ for problem-solving and sweetening. If there was a troublesome resonance or if something was recorded on a budget mic or in a weird room, I’d open my smart EQ. It wasn’t long before I started using it all the time on lead elements, almost like a preamp plugin or some other kind of “better-izer.”
That changed while I was mixing my friend Paul Deiss Smith II’s album Time Fabric Home. When Paul sent me his revision notes after my first mix, he said something to the effect of, “It sounds great, but it doesn’t sound like my guitar.” I immediately knew the culprit: the smart EQ.
Now, what is novel about some of these plugins is something called dynamic spectral processing. This process splits the signal into several, maybe even hundreds of, frequency bands. The plugin then dynamically attenuates or boosts these bands to match a spectral profile, usually predetermined by the plugin developer. At any given moment, if the tone of your guitar doesn’t match the tone of the profile, the plugin dynamically pushes it toward the profile.
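To make the idea concrete, here is a minimal numpy sketch of that kind of per-band profile matching. This is my own illustration, not Sonible’s or iZotope’s actual algorithm: it processes a single analysis frame, splits the spectrum into 32 bands, and nudges each band toward a target profile, with the gain change clamped to ±2 dB per band. The function names and band count are my assumptions.

```python
import numpy as np

def band_profile(frame, n_bands=32):
    """Measure the average magnitude in each frequency band of one frame.

    (Illustrative helper; a real plugin would average over time as well.)
    """
    mags = np.abs(np.fft.rfft(frame))
    edges = np.linspace(0, len(mags), n_bands + 1).astype(int)
    return np.array([mags[lo:hi].mean() for lo, hi in zip(edges, edges[1:])])

def match_spectral_profile(frame, target_profile, n_bands=32, max_db=2.0):
    """Pull one frame's band magnitudes toward a target spectral profile.

    Each band's gain is clamped to +/- max_db so the correction stays
    'subtle' -- the same +/-2 dB ceiling discussed in the text.
    """
    spectrum = np.fft.rfft(frame)
    mags = np.abs(spectrum)
    edges = np.linspace(0, len(mags), n_bands + 1).astype(int)
    gains = np.ones_like(mags)
    for b in range(n_bands):
        lo, hi = edges[b], edges[b + 1]
        band_level = mags[lo:hi].mean() + 1e-12  # avoid divide-by-zero
        raw_gain = target_profile[b] / band_level
        gain_db = np.clip(20 * np.log10(raw_gain), -max_db, max_db)
        gains[lo:hi] = 10 ** (gain_db / 20)
    return np.fft.irfft(spectrum * gains, n=len(frame))
```

Even in this toy version you can see the danger: the gains are computed band by band, so the relative levels of an instrument’s harmonics (which usually fall in different bands) get independently rewritten to fit the profile.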
The problem was that this processing from the smart EQ plugin had rearranged the harmonics of Paul’s instrument. And Paul, being the pro that he is, heard this: the overtones of his instrument had changed.
I had been warned about this by other mix engineers. Every instrument has its own unique harmonic character, and this is what makes an instrument sound like itself. Dynamic spectral processing changes the tone of a signal in a way that can be much more harmful than traditional EQ if you’re not careful. And in this case, even my subtle use of it was too much. A +/-2 dB change on each band was enough to make it sound like a different guitar.
So I decided to scrap the smart EQs entirely. After I removed them, I sent the mixes back to Paul, and he reported back that it sounded much better. And I thought so too. In the process, I came to the conclusion that I’d rather not chase the platonic ideal of what a guitar “should” sound like, especially not according to a black-boxed dataset. I’d much rather let a thing sound like itself.
As mixers, our job is not to make something sound perfect. Perfect is samey. Our job is to highlight the emotional character of a recording. Justin Colletti shares a reflection from an Andrew Scheps masterclass: “There’s a spirit in this thing. We picked it because it’s emotionally beautiful. Let’s respect what’s in there, and let’s use it. … That approach leads to texturally interesting mixes that don’t sound like everything else.” It’s perhaps a good thing that your recording doesn’t sound perfect.
Do I think smart EQs are bad, then? Not necessarily. They can be instructional for beginners. They can even be problem-solvers when nothing else works. But if anything can be learned from this story, it’s this: never insert a plugin automatically (especially an automated one!), and always listen critically to the changes you’re making.
What do you think? Do you use smart plugins or spectral processors? How do you like to use them?