A Simple Guide to Equalisers

Equalisers are an interesting topic. Many people steer clear of them, while others love the option to customise every song, album, pair of speakers or pair of headphones to their own personal tastes. To help you make an informed decision about whether and how to use an equaliser, here are some key facts.

Impact on Sound Quality

Firstly, electronic equalisers like those in iTunes and in your iPod will generally reduce the quality of the sound. Depending on the headphones or speakers you’re using and how loud you listen to the music, you may not notice the difference, so don’t write off equalisers on this basis alone – you need to weigh up the merits for and against. For some people, a slight reduction in quality might be worth the improvement to the overall tone of the sound. There are also some ways to use an equaliser that are less likely to affect the sound quality – I’ll discuss these later.

The main reason that equalisers decrease sound quality is the power required to boost audio output by even a small amount. A 3dB increase to the sound (or selected frequency) actually takes double the power! This generally means that you’re pushing the limits of the in-built amplifier of your computer or portable player which leads to a thing called clipping. Without going into detail, clipping refers to the top of the sound wave being cut off because the amplifier doesn’t have the power to create it. It’s a little bit like accelerating in your car until the engine hits the redline and the power disappears, resulting in a rapid loss of performance and potential damage to the equipment.
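
If you like to see the numbers, the decibel-to-power relationship is power ratio = 10^(dB/10), so every 3dB boost roughly doubles the power demand. Here’s a tiny sketch in plain Python that makes the cost of each boost concrete:

    # Power ratio needed to achieve a given dB boost: ratio = 10 ** (dB / 10)
    def power_ratio(db_boost):
        return 10 ** (db_boost / 10)

    for db in (1, 3, 6, 12):
        print(f"+{db} dB needs {power_ratio(db):.2f}x the power")

    # +1 dB needs 1.26x the power
    # +3 dB needs 2.00x the power
    # +6 dB needs 3.98x the power
    # +12 dB needs 15.85x the power

As you can see, maxing out a slider at +12dB asks for nearly sixteen times the power at that frequency – no wonder small built-in amplifiers run out of headroom.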

To minimise the impact on sound quality, you can employ a technique that’s often referred to as subtractive equalising. All this means is that you drag the sliders down rather than push the sliders up – I’ll explain.

iTunes EQ

See the image on the right? That’s the standard iTunes equaliser panel and it’s typical of most EQ setups: the sliders sit in the middle at “0dB”, meaning there’s no change to the standard amplification of each frequency. If you move a slider up by one “notch”, you’re increasing that frequency by 3dB and asking the amplifier to provide double the power at that frequency. The result is generally distortion around that frequency. It’s much easier to set an EQ by increasing the frequencies you want more of, but you’ve probably gathered that this will hurt the sound quality.
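
To see what clipping actually does to a signal, here’s a minimal sketch using numpy (assuming 1.0 represents digital full scale): boosting an already full-scale tone by 3dB pushes its peaks past the limit, and the tops of the wave are simply cut off.

    import numpy as np

    fs = 44100                            # sample rate in Hz
    t = np.arange(fs) / fs                # one second of sample times
    sine = np.sin(2 * np.pi * 440 * t)    # a full-scale 440 Hz tone

    # +3 dB is about 1.41x in amplitude (and 2x in power)
    boosted = sine * 10 ** (3 / 20)

    # The "amplifier" can't go past full scale, so the peaks get chopped flat
    clipped = np.clip(boosted, -1.0, 1.0)

    print(f"peak before clipping: {boosted.max():.2f}")    # ~1.41
    print(f"peak after clipping:  {clipped.max():.2f}")    # 1.00
    print(f"samples clipped: {np.sum(np.abs(boosted) > 1.0)}")

Those flattened peaks are the distortion you hear around a boosted frequency.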

EQ settings from -12dB

Here’s the alternative: rather than increasing the frequencies you want more of, decrease all the rest! The simplest way to do this is to start by dragging all of your EQ sliders down to the bottom (-12dB). At this time you might need to turn up the master volume of your device to get the sound back to an enjoyable level, but this is fine because the master volume doesn’t decrease quality. Now you’re ready to adjust your EQ. Start using the sliders to increase the frequencies you need up to a maximum of 0dB. Avoid going above 0dB for any frequency because it will instantly decrease sound quality.

All sliders increased (Maximum 0dB)

Once you’ve got the sound the way you want it, raise all the sliders by the same amount so that the shape of your EQ stays the same, but the highest slider sits right on 0dB (see the last EQ image).
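
If you’d rather not nudge ten sliders by hand, the adjustment is simple arithmetic: subtract the highest band’s gain from every band, and the loudest band lands on exactly 0dB while the shape is preserved. A quick sketch (the band values are just a made-up example curve, not a recommendation):

    # A hypothetical EQ curve in dB, one value per iTunes band (32 Hz to 16 kHz)
    bands = [6, 4, 2, 0, -2, -2, 0, 2, 3, 3]

    # Shift everything down so the highest slider sits at exactly 0 dB.
    # The shape of the curve (the gaps between bands) is unchanged.
    offset = max(bands)
    subtractive = [gain - offset for gain in bands]

    print(subtractive)
    # [0, -2, -4, -6, -8, -8, -6, -4, -3, -3]

Then bring the lost loudness back with the master volume, which doesn’t cost you quality.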

Note: one slight issue you might face with this technique is a lack of maximum volume. If your headphones or speakers are hard to drive and you need nearly 100% volume to get the sound you want, this technique may lead to insufficient sound levels. If that’s the case, you might want to look into extra amplification or different headphones or speakers.

The final dilemma you might be facing is which slider to increase to get the sound you want. The next section should help…

What Does Each Frequency Change?

So you’ve opened your EQ settings and you’re ready to perfect the sound signature for your ears only, but where to start? Which slider to slide?

In the end, it’s all experimentation for the fine details, but here are some clues about where to start. (I’m using the iTunes frequency points as a reference to make it easy for comparison.) I’d recommend using the following information while listening to a range of tracks that you’re familiar with. See if you can identify what’s “missing” from the sound based on the descriptions below and then add a little at a time to see if it helps. (If you’d like to hear each band in isolation first, there’s a small tone-generator sketch after this list.)

32 Hz – This is subwoofer territory – the bottom end of the bass range. As much as we all love it, sometimes it’s better to drop it out or leave it flat if your speakers can’t reproduce this depth of bass. Bass is difficult for amplifiers to sustain, so you can give your amp some breathing room by dropping this away if you can’t hear it anyway.

64 Hz – This is the part of the bass that we feel as much as hear. This is a great frequency to boost if you want to feel a bit more bass vibration. Increasing this will give your music more kick at the bottom end.

125 Hz – This is musical bass. If you’re listening to melodic bass guitar riffs, this is where the action is. It’s a good frequency to increase to emphasise the musicality and accuracy in bass rather than the rumble or the mass of the bass.

250 Hz – Although there’s no simple definition of bass vs midrange vs treble, I think of 250 Hz as the crossover point between bass and midrange. At 250 Hz we’re starting to increase the bottom end of male vocals and the lower range of instruments like guitars. Beware though – increasing the midrange sliders can very quickly make your music sound muffled or “canned” (like it’s being played through a tin can). I would rarely increase this frequency, but might consider reducing it slightly to open the sound up a bit and make it more “airy” and spacious.

500 Hz – Like 250 Hz, 500 Hz is the realm of muffled sound. It covers male vocals and the middle of instrumentation. Its impact on most music is a sense of muffling – like you’ve covered everything in cotton wool.

1 kHz – This is the realm of vocals, instruments like guitars and saxophones and the snare drum. It can bring brightness to the midrange, but can also start making the sound a bit tinny.

2 kHz – Around 1 – 2 kHz is where we start venturing into the realm of treble. We’re still in the world of vocals and instruments here, but we’re getting towards the top end, so boosting it will cause vocals to sound a bit more nasal and increase the perception of the texture of voices – things like breathiness and raspiness. It can also make the sound very tinny.

4 kHz – This is the frequency that’s most prominent in sounds like “sh” and “ssssss”. It’s also a part of sounds like cymbal hits and the upper end of a snare drum’s sound. Adding emphasis to the 4 kHz range can very quickly make music harsh on the ears and unpleasant to listen to. That said, using it carefully can bring clarity and brightness to vocals and percussion.

8 kHz – This frequency is pure treble. This is what’s most prominent if you turn the “treble” dial up. It influences the very upper end of sounds like “sh” and “ssss” and it has a major impact on percussion such as snares and cymbals. It’s great for adding brightness to the sound, but can also get uncomfortable if overused.

16 kHz – Given that 20 kHz is considered the upper end of human hearing, this is obviously the pointy end of the treble band. It mostly affects cymbals and similar sounds, but it also picks up the brightness and detail in the texture of certain instruments. For example, increasing the 16 kHz slider (and/or the 8 kHz slider to a degree) will enhance the sound of the plectrum hitting the strings of a strummed guitar. Like the 8 kHz slider, the 16 kHz slider is a great way to add brightness, and it tends to be more gentle on our ears. That’s not to say it’s not just as loud, but generally it doesn’t sound as harsh. Incidentally, 16 kHz is also the part of the hearing range that deteriorates first, so we might appreciate a slight boost to this area as we get older.
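
If you want to train your ear on these bands before you start sliding, here’s a rough sketch (plain Python standard library) that writes a short test tone for each iTunes centre frequency – the file names and the two-second duration are just my choices:

    import math
    import struct
    import wave

    FS = 44100          # sample rate in Hz
    DURATION = 2.0      # seconds of tone per file
    FREQS = [32, 64, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]

    for freq in FREQS:
        frames = bytearray()
        for i in range(int(FS * DURATION)):
            # Half amplitude leaves headroom and is kinder to your ears
            sample = 0.5 * math.sin(2 * math.pi * freq * i / FS)
            frames += struct.pack("<h", int(sample * 32767))
        with wave.open(f"tone_{freq}hz.wav", "wb") as wav:
            wav.setnchannels(1)     # mono
            wav.setsampwidth(2)     # 16-bit samples
            wav.setframerate(FS)
            wav.writeframes(bytes(frames))

Play the files back through your usual speakers or headphones and you’ll quickly get a feel for which slider controls which part of the sound.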

I hope this has helped you to get more from your music collection, favourite speakers or portable player!

Understanding MP3s (and other compressed music) – Part 2

Welcome to Part 2 of my series of posts about the pros and cons of compressed audio. If you haven’t read Part 1, it’d be a good idea to start there. Here’s a link: Understanding MP3s (and other compressed music) – Part 1

Wielding the Eraser

I explained in Part 1 that compression means pulling out sounds that we won’t actually hear, but think about this… The music is like a painting that we “see” with our ears. Compressing music is the equivalent of taking an eraser to the Mona Lisa. It’s like saying, “No-one will notice this brush stroke of stray colour or this tiny bit of shading.” Perhaps that’s true and, to a degree, no-one would notice, but at some point the whole painting’s just going to lose something. It’ll lose a little bit of soul. Sure, you might not pick exactly which parts are missing, but you’ll know something’s not right. Here’s an example:

Notice how the sky in the second image looks unnatural and full of lines? That’s because the process of compressing has removed some of the subtle shades of blue and replaced them with wider bands of other shades. For example, let’s number the different shades 1.1, 1.2, 1.3 and 1.4. During the compression process we would replace shade 1.2 with a second band of 1.1 and replace 1.4 with a second band of 1.3. Now that blue sky would be made of bands of shades 1.1, 1.1, 1.3, 1.3. You can see the evidence of this above in the second image.
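
This banding effect is easy to reproduce: quantising a smooth range of values down to fewer levels produces exactly those wide, flat stripes. Here’s a rough sketch of the idea in Python (the numbers are arbitrary – they just show neighbouring shades collapsing together):

    # A smooth gradient of 101 "shades" running from 0.0 to 1.0
    shades = [i / 100 for i in range(101)]

    # Quantise to only 4 levels: every shade snaps to its nearest band,
    # so runs of slightly-different values collapse into flat stripes
    levels = 4
    banded = [round(s * (levels - 1)) / (levels - 1) for s in shades]

    print(sorted(set(banded)))   # [0.0, 0.333..., 0.666..., 1.0]

One hundred and one subtly different shades have become four – the visual equivalent of what heavy audio compression does to the subtle details in a sound.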

So looking at the example photos, it’s clear that they’re both the same photo, but if you had to choose one to print and frame, I’m guessing you’d choose the first one because it’s closer to real life and therefore more pleasing to the eye. The same goes for music.

Think of music as a complex bunch of vibrations making a particular range of patterns. Any little detail you remove from those vibrations will permanently alter the overall “picture”. You’ll still recognise the sound or the song, but it won’t actually sound identical to the original.

Let’s talk about the ear again. Remember my description of how we hear? The ear perceives music like the eyes perceive a painting. You take it all in at once, you don’t pick out a particular colour here and a particular texture there, you just see it as a picture. When we compress sound we permanently alter the “picture” as if we had taken to it with an eraser. To our ears, the result is no different to the photo above on the right. It might not be as dramatic (depending on the level of compression), but it’s essentially the same. You don’t notice a loss of individual sounds, you notice a loss of overall quality and realism.

Here’s one final visual version to show you what I mean. The following charts are spectrograms that show sound as colour. The darker the colour, the louder the sound, and the higher up the colour appears, the higher the pitch of the sound. A bass guitar shows up down the bottom while a violin shows up further towards the top. There are two lines in each chart – these are the left and right stereo channels.

"This is How a Heart Breaks" - no compression

"This is How a Heart Breaks" - moderate compression

"This is How a Heart Breaks" - mid-high compression (128 kbps)

Notice how the density of the yellow and orange colours reduces as the compression increases? The more blue you see, the less of the musical “picture” is still intact. You might also notice that there is more variety and clarity in the colours on the top chart and the colours all get more “blurry” as you move down the charts. That’s the effect of averaging things out. If you look at the first spectrogram and then the second, you might notice that the second one looks like a slightly out-of-focus copy of the first one.

By the time we get to 128 kbps, nearly every high frequency sound has been removed. That’s because we lose our hearing at these frequencies first and are less likely to notice the missing sound… or at least that’s the theory. The key thing to notice here is that the musical pictures are different. This is the most visual representation of sound that I can provide and it illustrates exactly how the musical “picture” is gradually erased by compression.
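
If you’d like to make charts like these from your own music, here’s a minimal sketch using numpy, scipy and matplotlib (it assumes a WAV file called song.wav – a hypothetical name, so decode your MP3 to WAV first with your tool of choice):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    # Load the audio (song.wav is a placeholder for your own file)
    sample_rate, samples = wavfile.read("song.wav")
    if samples.ndim > 1:
        samples = samples[:, 0]   # keep just the left channel for simplicity

    freqs, times, power = spectrogram(samples, fs=sample_rate)

    # Plot on a dB scale; the small constant avoids taking log of zero
    plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12))
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (Hz)")
    plt.title("Spectrogram")
    plt.show()

Run it once on a lossless rip and once on a 128 kbps copy of the same track and you’ll see the high frequencies disappear, just like in the charts above.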

In the Final Instalment

Now that you know how we perceive sound and how compression works, you’re all ready to read about why compressed music loses its “magic”. In Part 3, I’ll explain a bit about harmonics and their role in creating the soul of the music. I’ll also sum up what this all means when it comes to choosing the level of compression that’s right for you.

As always, I hope you’re enjoying this information and I welcome any feedback or questions you might have.

Ready for Part 3?