Listening Tips

So, you want to evaluate a pair of loudspeakers, or a new turntable, or you’re trying to decide whether to subscribe to a music streaming service, or its more expensive “hi-fi” version – or just to stick with your CD collection… How should you listen to such a thing and form an opinion about its quality or performance – or your preference?

One good way to do this is to compartmentalise what you hear into different categories or attributes. This is similar to breaking down taste and sensation into different categories – sweetness, bitterness, temperature, etc… This allows you to focus or concentrate on one thing at a time so that you’re not overwhelmed by everything all at once… Of course, the problem with this is that you become analytical, and you stop listening to the music, which is why you’re there in the first place…

Normally, when I listen to a recording over a playback system, I break things down into 5 basic categories:

  • timbre (or spectral balance)
  • spatial aspects
  • temporal behaviour
  • dynamics
  • noise and distortion

Each of these can be further broken down into sub-categories – and there’s (of course) some interaction and overlap – however it’s a good start…

Timbre

One of the first things people notice when they listen to something like a recording played over a system (for example, a pair of headphones or some loudspeakers in a room) is the overall timbre – the balance between the different frequency bands.

Looking at this from the widest point of view, we first consider the frequency range of what we’re hearing. How low and how high in frequency does the signal extend?

On the next scale, we listen for the relative balance between the bass (low frequencies), the midrange, and the treble (high frequencies). Assuming that all three bands are present in the original signal, do all three get “equal representation” in the playback system? And, possibly more importantly: should they? For example, if you are evaluating a television, one of the most important things to consider is speech intelligibility, which means that the midrange frequency bands are probably a little more important than the extreme low- and high-frequency bands. If you are evaluating a subwoofer, then its behaviour at very high frequencies is irrelevant…
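If you want to put rough numbers on that kind of balance, it’s easy to sketch. Here’s a minimal example (in Python, assuming NumPy) that measures the total energy in three bands; the crossover frequencies of 250 Hz and 2.5 kHz are just my arbitrary round numbers, not a standard of any kind.

```python
import numpy as np

def band_levels(signal, fs, f_low=250.0, f_high=2500.0):
    """Total energy (dB) in the bass, midrange, and treble bands.

    The crossover frequencies are arbitrary round numbers chosen
    for illustration, not a standard.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def band_db(lo, hi):
        idx = (freqs >= lo) & (freqs < hi)
        return 10.0 * np.log10(np.sum(power[idx]) + 1e-20)

    return {"bass": band_db(0.0, f_low),
            "mid": band_db(f_low, f_high),
            "treble": band_db(f_high, fs / 2.0)}

# Example: white noise has roughly equal energy per Hz, so the much
# wider treble band collects more total energy than the bass band.
fs = 48000
print(band_levels(np.random.randn(fs), fs))
```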

Zooming into more details, we can ask whether there are any individual notes sticking out. This often happens in smaller rooms (or cars), resulting in a feeling of “uneven bass”: some notes in the bass region are louder than others. If there are narrow peaks in the upper midrange, then you can get the impression of “harshness” or “sharpness” in the system. (Although words like “harsh” and “sharp” might also be symptoms of distortion, which has an effect of its own on timbre…)
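A little arithmetic shows why that “uneven bass” tends to happen in small rooms: a room’s axial resonances sit at f(n) = n · c / (2 · L), which lands squarely in the bass region for domestic room dimensions. A quick sketch (the room dimensions below are invented for the example):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def axial_modes(length_m, count=3):
    """First few axial mode frequencies along one room dimension:
    f(n) = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2.0 * length_m)
            for n in range(1, count + 1)]

# Example: a 4.2 m x 3.1 m x 2.4 m room (made-up dimensions)
for name, dim in [("length", 4.2), ("width", 3.1), ("height", 2.4)]:
    print(name, [round(f) for f in axial_modes(dim)], "Hz")
```

For this imaginary room, every one of those frequencies falls between about 40 Hz and 220 Hz, which is exactly where those louder-than-their-neighbours bass notes live.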

Spatial

The next things to focus on are the spatial aspects of the recording in the playback system. First we’ll listen for imaging: the placement of instruments and voices across the sound stage, thinking left-to-right. This imaging has two parameters to listen for: accuracy (are things where they should be?) and precision (are they easy to point to?). Note that, depending on the recording technique used by the recording engineer, it’s possible that images are neither accurate nor precise, so you can’t expect your loudspeakers to make things more accurate or more precise.
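As a side note: that left-to-right placement in a typical studio mix is usually just amplitude panning. Here’s a small sketch of a constant-power (sine/cosine) pan law, which is one common textbook choice, not necessarily what any particular console uses:

```python
import numpy as np

def constant_power_pan(mono, position):
    """Pan a mono signal to a stereo pair with a sine/cosine pan law.

    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left, right); total power is constant at every position.
    """
    theta = (position + 1.0) * np.pi / 4.0  # map [-1, 1] to [0, pi/2]
    return mono * np.cos(theta), mono * np.sin(theta)

# Example: a 1 kHz tone placed halfway to the right
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
left, right = constant_power_pan(tone, 0.5)
print(f"L/R level difference: "
      f"{20 * np.log10(np.max(np.abs(left)) / np.max(np.abs(right))):.1f} dB")
```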

Secondly, we listen for distance and therefore depth. Distance is the perceived distance from you to the instrument. Is the voice near or far? Depth is the distance between the closest instrument and the farthest instrument (e.g. the lead vocal and the synth pad and reverberation in the background – or the principal violin at the front and the xylophone at the back).

Next we listen for the sense of space in the recording: the spaciousness and envelopment. The room around the instruments can range from non-existent (e.g. Suzanne Vega singing “Tom’s Diner”) to huge (a trombone choir in a water reservoir) and anything in between.

It is not uncommon for a recording engineer to separate instruments in different rooms and/or to use different reverb algorithms on them. In this case, it will sound like each instrument or voice has its own amount of spaciousness that is different from the others.

Also note that, just because an instrument has reverb on it doesn’t necessarily mean it will sound enveloping or spacious. Listen to “Chain of Fools” by Aretha Franklin on a pair of headphones. You’ll hear the snare drum in your right ear, but the reverb from the same snare is in the centre of your head. (It was a reverb unit with a single-channel output, and the mixing console could only place images in one of three locations: Left, Centre, or Right.)

Temporal

The temporal aspects of the sound are those that are associated with time. Does the attack of the harpsichord or the pluck of a guitar string sound like it starts instantaneously, or does it sound “rounded” or as if the plectrum or pick is soft and padded?

The attack is not the only aspect of the temporal behaviour of a system or recording. The release – the stop of a sound – is just as (or maybe even more) important. Listen to a short, dry kick drum (say, the kick in “I Bid You Goodnight” by Aaron Neville that starts at around 0:20). Does it just “thump” or does it “sing” afterwards at a single note – more like a “boommmmmm”… sound?

It’s important to say here that, if the release of a sound is not fast, it might be a result of resonances in your listening room, better known as “room modes”. These will cause a couple of frequencies to ring longer than others, which can, in turn, make things sound “boomy” or “muddy”. (In fact, when someone tells me that things sound boomy or muddy, my first suspect is temporal problems, not timbral ones.) Note as well that those room modes might have been in the original recording space… and there’s not much you’re going to be able to do about that without a parametric equaliser and a lot of experience…
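If you want to see that ringing instead of just hearing it, you can synthesise a decaying sinusoid standing in for a single room mode and estimate how long it takes to fall 60 dB (the usual RT60-style measure). All the numbers below are invented for illustration:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second of time stamps

def ringing_mode(freq_hz, t60_s):
    """A decaying sinusoid standing in for one room mode:
    its envelope falls by 60 dB in t60_s seconds."""
    envelope = 10.0 ** (-3.0 * t / t60_s)  # -60 dB when t = t60_s
    return envelope * np.sin(2 * np.pi * freq_hz * t)

sig = ringing_mode(55.0, 0.8)  # a 55 Hz mode ringing for 0.8 s (made up)

# Estimate the decay time back from the signal itself:
# the last sample still within 60 dB of the peak level.
level_db = 20 * np.log10(np.abs(sig) + 1e-12)
t60_estimate = t[level_db > level_db.max() - 60.0][-1]
print(f"estimated T60: {t60_estimate:.2f} s")
```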

Dynamics

Dynamics are partly related to temporal behaviour (a recording played on loudspeakers can’t sound “punchy” if the attack and release aren’t quick enough) but also a question of capability. Does the recording have quiet and loud moments (not only in the long term, but also in the very short term)? And, can the playback system accurately produce those differences? (A small loudspeaker simply cannot play low frequencies loudly – so if you’re listening to a track at a relatively high volume, and then the kick drum comes in, the change in level at the output will be less than the change in level on the recording.)
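One crude way to put a number on those short-term differences is the crest factor: the peak-to-RMS ratio within a short window. Heavily limited material sits at a low, nearly constant value; punchy material doesn’t. A sketch (the 50 ms window length is my arbitrary choice):

```python
import numpy as np

def short_term_crest_factor(signal, fs, window_s=0.05):
    """Peak-to-RMS ratio (dB) in consecutive short windows.

    Higher values mean "peakier", more dynamic material; heavily
    compressed or limited material sits at low, constant values.
    """
    n = int(window_s * fs)
    crest = []
    for start in range(0, len(signal) - n, n):
        block = signal[start:start + n]
        peak = np.max(np.abs(block))
        rms = np.sqrt(np.mean(block ** 2))
        if rms > 0:
            crest.append(20 * np.log10(peak / rms))
    return np.array(crest)

# Example: a sine wave has a crest factor of about 3 dB in every window
fs = 48000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 440 * t)
print(f"sine: {short_term_crest_factor(sine, fs).mean():.1f} dB")
```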

Noise and Distortion

So far, the 4 attributes I’ve talked about above are descriptions of how the stuff-you-want “translates” through a system. Noise and Distortion is the heading for the stuff-you-don’t-want: extra sounds that don’t belong. (I’m not talking about the result of a distortion pedal on an AC/DC track; without that, it would be a Tuck and Patti track…)

However, Noise and Distortion are very different things. Noise is what is known as “program independent” – meaning that it does not vary as a result of the audio signal itself. Tape hiss on a cassette is a good example of this… It might be that the audio signal (say, the song) is loud enough to “cover up” or “mask” the noise – but that’s your perception changing – not the noise itself.
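You can convince yourself of the “program independent” part numerically: mix a fixed-level hiss with a programme signal at two different gains and note that only the signal-to-noise ratio changes; the hiss itself doesn’t move. A toy sketch, with white noise standing in for tape hiss:

```python
import numpy as np

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

fs = 48000
t = np.arange(fs) / fs
hiss = 0.001 * np.random.default_rng(0).standard_normal(fs)  # fixed "tape hiss"
song = np.sin(2 * np.pi * 440 * t)  # stand-in for the programme material

for gain in (1.0, 0.01):  # a loud programme, then a quiet one
    snr = rms_db(gain * song) - rms_db(hiss)
    print(f"programme at {rms_db(gain * song):6.1f} dB: "
          f"SNR {snr:5.1f} dB, hiss still at {rms_db(hiss):.1f} dB")
```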

Distortion is different: it’s garbage that results from the audio signal itself being screwed up, so it’s “program dependent”. If there were no signal, there would be nothing to distort. Note, however, that distortion takes many forms. One example is clipping, where the loud signals are “chopped off”, resulting in extra high frequencies on the attacks of notes. Another example is quantisation error on old digital recordings, where the lower the level, the more distortion you get (this makes reverberation tails sound “scratchy” or “granular”).
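To see that “lower the level, more distortion” effect, quantise a sine wave to 8 bits at two input levels and compare the error to the signal. (This sketch uses an undithered quantiser, which is the worst case; properly dithered systems trade this distortion for benign noise.)

```python
import numpy as np

def quantise(x, bits):
    """Undithered uniform quantiser over the range [-1, 1)."""
    step = 2.0 ** (1 - bits)  # quantisation step size
    return np.round(x / step) * step

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-20)

fs = 48000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 440 * t)

for level_db in (0.0, -40.0):
    x = sine * 10.0 ** (level_db / 20.0)
    error = quantise(x, bits=8) - x  # distortion: correlated with the signal
    print(f"signal at {level_db:6.1f} dBFS: "
          f"error {rms_db(error) - rms_db(x):6.1f} dB relative to the signal")
```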

A completely different, and possibly more annoying, kind of distortion is that created by “lossy” psychoacoustic codecs such as MP3. However, if you’re not trained to hear those types of artefacts, they may be difficult to notice, particularly with some kinds of audio signals. In addition, saying something as broad as “MP3” means very little. You would need to know the bitrate and a number of other parameters in the MP3 encoder, in addition to knowing something about the signal being encoded, to make any kind of reasonable prediction about the audibility of the “distortion” it creates.

Wrapping up…

It’s important here to emphasise that, although the loudspeakers, their placement, the listening room, and the listening position all have a significant impact on how things sound (the details of all the attributes above), the recording is (hopefully) the main determining factor… If you’re listening to a recording of solo violin, then you will not notice if your subwoofer is missing… Loudspeakers should not make recordings sound spacious if the recordings are originally monophonic. That would be like a television applying colours to a black-and-white movie…