B&O Tech: Visual Analogies to Problems in Audio

#23 in a series of articles about the technology behind Bang & Olufsen loudspeakers

 

Audio people throw words around like “frequency” and “distortion” and “resolution” without wondering whether anyone else in the room (a) understands or (b) cares. One of the best ways to explain things to people who do not understand but do care is to use analogies and metaphors. So, this week, I’d like to give some visual analogies of common problems in audio.

 

Let’s start with a photograph. Assuming that your computer monitor is identical to mine, and the background light in your room is exactly the same as it is in mine, then you’re seeing what I’m seeing when you look at this photo.

[Image: original]

Let’s say that you, sitting there, looking at this photo is analogous to you, sitting there, listening to a recording on a pair of loudspeakers or over headphones. So what happens when something in the signal path messes up the signal?

 

Perhaps, for example, you have a limited range in your system. That could mean that you can’t play the very low and/or high frequencies because you are listening through a smaller set of loudspeakers instead of a full-range model. Limiting the range of brightness levels in the photo is similar to this problem – so nothing is really deep black or bright white. (We could have an argument about whether this is an analogy to a limited dynamic range in an audio system, but I would argue that it isn’t – since audio dynamic range is limited by a noise floor and a clipping level, which we’ll get to later…) So, the photo below “sounds” like an audio system with a limited range:

[Image: limited_range]

Of course, almost everything is there – sort of – but it doesn’t have the same depth or sparkle as the original photo.
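If it helps to see the analogy in numbers, here is a minimal Python sketch of the same idea – the “photo” is just a handful of made-up brightness values, and the limits 40 and 215 are arbitrary choices for the example, not anything from real imaging:

```python
import numpy as np

# A hypothetical 8-bit "photo": brightness values from 0 (black) to 255 (white)
photo = np.array([0, 32, 128, 224, 255])

# Squeeze the range so nothing is truly black or truly white -- like a
# small loudspeaker that can't reproduce the lowest and highest frequencies
limited = np.interp(photo, [0, 255], [40, 215]).astype(int)

print(limited)  # the extremes are pulled in; the middle is barely touched
```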


Or what if you have a noisy device in your signal chain? For example, maybe you’re listening to a copy of the recording on a cassette tape – or the air conditioning is on in your listening room. Then the result will “sound” like this:

[Image: noise]

As you can see, you still have the original recording – but there is an added layer of noise with it. This is not only distracting, but it can obscure some of the more subtle details that are on the same order of magnitude as the noise itself.
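The same idea in code, under toy assumptions (the “recording” here is just a sine wave, and the noise level of 0.05 is an arbitrary choice for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A clean "recording": one cycle of a sine wave
signal = np.sin(2 * np.pi * np.arange(100) / 100)

# Add a layer of low-level hiss. The original is still there underneath,
# but details on the same order of magnitude as the noise are obscured.
noisy = signal + 0.05 * rng.standard_normal(signal.shape)
```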


In audio, the quietest music is buried in the noise of the system (either the playback system or the recording system). On the other extreme is the loud music, which can only go so loud before it “clips” – meaning that the peaks get chopped off because the system just can’t go up enough. In other words, the poor little woofer wants to move out of the loudspeaker by 10 mm, but it can only move 4 mm because the rubber holding on to it just can’t stretch any further. In a photo, this is the same as turning up the brightness too much, resulting in too many things just turning white because they can’t get brighter (in the old days of film, this was called “blowing out” the photo), as is shown below.

[Image: clipping]

 

This “clipping” of the signal is what many people mean when they say “distorted” – however, distortion is a much broader range of problems than just clipping. To be really pedantic, any time the output of a system is not identical to its input, the signal is distorted.
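The woofer example maps directly onto one line of code – this sketch just clips a list of made-up excursion values at the ±4 mm limit from the text:

```python
import numpy as np

# The woofer "wants" to move +/-10 mm, but the surround only stretches +/-4 mm
desired = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])  # desired excursion in mm
actual = np.clip(desired, -4.0, 4.0)               # the peaks get chopped off

print(actual)  # [-4. -4.  0.  4.  4.]
```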


A more common problem that many people face is a modification of the frequency response. In audio, the frequency is (very generally speaking) the musical pitch of the notes you’re hearing. Low notes are low frequencies, high notes are high frequencies. Large engines emit low frequencies, tiny bells emit high frequencies. With light, the frequency of the light wavicle hitting your eyeball determines the colour that you see. Red is a low frequency and violet is a high frequency (see the table on this page for details). So, if you have a pair of headphones that, say, emphasises bass (the low frequencies) more than the other areas, then it’s the same as making the photo more red, as shown below.

[Image: freq_response]
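For the code-minded, a bass-heavy response can be faked in a few lines – purely an illustration (a real pair of headphones does this with its acoustics, not an FFT mask, and the 200 Hz corner and +6 dB figure are invented for the example):

```python
import numpy as np

fs = 48000                      # sampling rate in Hz
t = np.arange(fs) / fs          # one second of samples
# Two tones: a "bass" note at 50 Hz and a "treble" note at 5 kHz
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5000 * t)

# Crude bass emphasis: boost everything below 200 Hz by 6 dB
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
spectrum[freqs < 200] *= 10 ** (6 / 20)    # +6 dB as an amplitude ratio
boosted = np.fft.irfft(spectrum, n=len(signal))
```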


Of course, not all impairments to the audio signal are accidental. Some are the fault of the user who makes a conscious decision to be more concerned with convenience (i.e. how many songs you can fit on your portable player) than audio quality. When you choose to convert your CDs to a “lossy” format (like MP3, for example), then (as suggested by the description) you’re losing something. In theory, you are losing things that aren’t important (in other words, your computer thinks that you can’t hear what’s thrown away, so you won’t miss it). In practice, that debate is up to you and your computer (and your bitrate, and the codec you’ve chosen, and the quality of the rest of your system, and how you listen to music, and what kind of music you’re listening to, and whether or not there are other things to listen to at the same time, and a bunch of other things…) However, if we’re going to make an analogy, then we have to throw away the details in our photo, keeping enough information to be moderately recognisable.

[Image: limited_res]

As you can see, all the colours are still there. And, if you stand far enough away (or if you take off your glasses) it might just look the same. But, if you look carefully enough, then you might notice that something is missing… Keep looking… you’ll see it…
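As a rough sketch of “throwing away detail”, here is a toy requantisation in Python – real lossy codecs are far cleverer (they discard what a psychoacoustic model says you won’t hear), but the flavour is the same:

```python
import numpy as np

signal = np.sin(2 * np.pi * np.arange(64) / 64)

# "Lossy" storage: requantise to only 7 coarse levels. The broad strokes
# (the "colours") survive, but the fine detail is gone for good.
steps = 3                              # levels = 2 * steps + 1 = 7
coarse = np.round(signal * steps) / steps
```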


So, as you can see, any impairment of the “signal” is a disruption of its quality – but we should be careful not to confuse this with reality. There are lots of people out there who have a kind of weird religious belief that, when you sit and listen to a recording of an orchestra, you should be magically transported to a concert hall as if you were there (or as if the orchestra were sitting in your listening room). This is silly. That’s like saying that when you sit and watch a re-run of Friends on your television, you should feel like you’re actually in the apartment in New York with a bunch of beautiful people. Or that, when you watch a movie, you feel like you’re actually in a car chase or a laser battle in space. Music recordings are no more of a “virtual reality” experience than a television show or a film. In all of these cases (the music recording, the TV episode and the film), what you’re hearing and seeing should not be life-like – they should be better than life. You never have to wait for the people in a film to look for a parking space or go out to pee. Similarly, you never hear a mistake in the trumpet solo in a recording of the Berlin Philharmonic and you always hear Justin Bieber singing in tune. Even the spatial aspects of an “audiophile” classical recording are better-than-reality. If you sit in a concert hall, you can either be close (and hear the musicians much louder than the reverberation) or far (and hear much more of the reverberation). In a recording, you are sitting both near and far – so you have the presence of the musicians and the spaciousness of the reverb at the same time. Better than real life!

So, what you’re listening to is a story. A recording engineer attended a music performance, and that person is now recounting the story of what happened in his or her own style. If it’s a good recording engineer, then the storytelling is better than being there – it’s more than just a “police report” of a series of events.

To illustrate my point, below is a photo of what that sinking WWII bunker actually looked like when I took the photo that I’ve been messing with.

[Image: reality]

 

Of course, you can argue that this is a “better” photo than the one at the top – that’s a matter of your taste versus mine. Maybe you prefer the sound of an orchestra recorded with only two microphones and played through two loudspeakers. Maybe you prefer the sound of the same orchestra recorded with lots of microphones and played through a surround system. Maybe you like listening to singers who can sing. Maybe you like listening to singers who need auto-tuners to clean up the mess. This is just personal taste. But at least you should be choosing to hear (or see) what the artist intended – not a modified version of it.

This means that the goal of a sound system is to deliver, in your listening room, the same sound as the recording engineer heard in the studio when he or she did the recording. Just like the photos at the top of this page should look exactly the same to you, on your screen, as they do to me on mine.


B&O Tech: Listening Tips & Tricks

#22 in a series of articles about the technology behind Bang & Olufsen loudspeakers

 

Let’s say that you go to the store and you listen to a pair of loudspeakers with some demo music they have on a shelf there, and you decide that you like the loudspeakers, so you buy them.

Then, you take them home, you set them up, you put on one of your recordings, and you change your mind – you don’t like the loudspeakers.

What happened? Well, there could be a lot of reasons behind this.

 

Tip #1: Loudness

In the last article, I discussed why a “loudness” function is necessary when you change the volume setting while listening to your system. The setup of that article discussed the issue of Equal Loudness Contours, shown again as a refresher in Figure 1, below.

Fig 1: The Equal Loudness contours for 0 phons (bottom curve) to 90 phons (top curve) in 10 phon increments, according to ISO226.

Let’s say that, when you heard the loudspeakers at the store, the volume knob was set so that, if you had put in a -20 dB FS, 1 kHz sine wave, it would have produced a level of 70 dB SPL at the listening position in the store. Then, let’s say that you go home and set the volume such that it’s about 10 dB quieter than it was when you heard it at the store. This means that, even if you listen to exactly the same recording, and even if your listening room at home were exactly the same as the room at the store, and even if the placement of the loudspeakers and the listening position in your house were exactly the same as at the store, the loudspeakers would sound different at home than at the store.

Figure 2 below shows the difference between the 70 phon curve from Figure 1 (sort of what you heard at the store) and the 60 phon curve (sort of what you hear at home, because you turned down the volume). (To find out which curve is which in Fig 1, the phon value of the curve is its value at 1 kHz.)

 

Fig 2: The normalised difference between the 60 phon curve and the 70 phon curve from Fig 1.

 

As you can see in Figure 2, by turning down the volume by 10 dB, you’ve changed your natural sensitivity to sound – you’re as much as 5 or 6 dB less sensitive to low frequencies and also less sensitive to the high end by a couple of dB. In other words, by turning down the volume, even though you have changed nothing else, you’ve lost bass and treble.

In fact, even if you only turned down the volume by 1 dB, you would get the same effect, just by a different amount, as is shown in Figure 3.

 

Fig 3: The difference between the 67 (red), 68 (blue), and 69 (black) phon curve and the 70 phon curve. Note that these have been normalised to remove the frequency-independent gain differences. Only frequency-dependent sensitivity differences are shown.

 

So, as you can see here, even by changing the volume knob by 1 dB, you change the perceived frequency response of the system by about 0.5 dB in a worst case. The quieter you listen, the less bass and treble you have (relative to the perceived level at 1 kHz).
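The decibel arithmetic behind these volume changes is easy to check for yourself – a level change in dB corresponds to an amplitude ratio of 10^(dB/20):

```python
def db_to_ratio(db):
    """Amplitude ratio corresponding to a level change in dB."""
    return 10 ** (db / 20)

print(db_to_ratio(-10))  # about 0.316: 10 dB down leaves ~1/3 the amplitude
print(db_to_ratio(-1))   # about 0.891: even 1 dB down is an ~11% drop
```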

So, this means that, if you’re comparing two systems (like the loudspeakers at the store and the loudspeakers at home, or two different DACs, or your system before and after you connect the fancy new speaker wire that you were talked into buying), if you are not listening at exactly the same level, your hearing is not behaving the same way – so any differences you hear may not be a result of the system.

Looking at this a different way, if you were to compare two systems (let’s say a back-to-back comparison of two DACs) that had frequency response differences like the ones shown in Figure 3, I would expect that you could hear the difference between them. However, this is YOUR frequency response difference, just by virtue of the fact that you are not comparing them at the same listening level. The kicker here is that, if the difference in level is only 1 dB, you might not immediately hear one as being louder than the other – but you might hear the timbral differences between them… So, unless you’ve used a reliable SPL meter to ensure that they’re the same level, then they’re probably not the same level – unless you’re being REALLY careful – and even then, I’d recommend being more careful than that.

This is why, when researchers are doing real listening tests, they have to pay very careful attention to the listening level. And, if the purpose of the listening test is to compare two things, then they absolutely must be at the same level. If they aren’t, then the results of the entire listening test can be thrown out the window – they’re worthless.

It’s also why professionals who work in the audio industry like recording engineers, mastering engineers, and re-recording engineers always work at the same listening level. This, in part, ensures that they have consistency in their mixes – in other words, they have the same bass-midrange-treble balance in all their recordings, because they were all monitored at the same listening level.

Tip #2: Recordings

If you were selling your house, and you got a call from your real estate agent that there were some potential buyers coming tomorrow to see your place, you would probably clean up. If you were really keen, not only would you clean up, but you would put out some fresh flowers in a vase, and, a half-an-hour before your “guests” arrived, you’d be pulling a freshly-baked loaf of bread out of the oven (because there’s nothing more welcoming than walking into a house that smells like freshly-baked bread…) You would NOT leave the bathroom in a mess, your bed unmade, dirty dishes in the sink, and yesterday’s dirty socks on the floor. In short, you want your house to look its best – otherwise you won’t get people through the front door (unless the price is REALLY good…)

Similarly, if you worked in a shop selling loudspeakers, part of your job is to sell loudspeakers. This means that you spend a good amount of time listening to a lot of different types of music on the loudspeakers in your shop. Over time, you’ll find that some recordings sound better than others for some reason that has something to do with the interactions between the recordings, the loudspeakers, the room’s acoustics, and your preferences. If you were a smart salesperson, you would make a note of the recordings that sound “bad” (for whatever reason) and you would not play them for potential customers that come into your store. Doing so would be the aural equivalent of leaving your dirty socks on the floor.

So, this means that, if you are the customer in the shop, listening to a pair of loudspeakers that you may or may not buy, you should remember that you’re probably going to be presented with a best-case scenario. At the very least, you should not expect the salesperson to play a recording that makes the loudspeakers sound terrible. Of course, this might mean many things. For example, it might mean that the loudspeakers are GREAT – but if they’re being used to play a really bad recording that you’ve never heard before, then you might think that the reason it sounds bad is the loudspeakers, and not the recording. So, you’ll walk out of the shop hating the loudspeakers instead of the recording.

So, the moral of the story here is simple: if you’re going to a shop to listen to a pair of loudspeakers, bring your own recordings. That way, you know what to expect – and you’ll test the loudspeakers on music that you like. Even if you bring just one CD and listen to just one song – as long as the song is one that you’ve heard A LOT, then you’re going to get a much better idea of how the loudspeakers are behaving than if you let the salesperson choose the demo music. In a perfect world, you’ll put on your song, and your jaw will drop while you think “I’ve NEVER heard it sound this good!”.

 

Tip #3: Room Acoustics

It goes without saying that the acoustical behaviour of a room has a huge effect on how a loudspeaker sounds (I talked about this a lot in this posting). So does the specific placement of the loudspeakers and the listening position within a room. (I talked about this a lot in this posting). So, this also means that a pair of loudspeakers in a shop’s showroom will NOT sound the same as the same loudspeakers in your house – not even if you’ve aligned the listening levels and you’re playing the same recording. Maybe you have a strong sidewall reflection in your living room that they didn’t have in the showroom. Maybe the showroom is smaller than your living room, so the room modes are at higher frequencies and “helping out” the upper bass instead of the lower bass. Maybe, in the showroom, the loudspeakers were quite far from the wall behind them, but in your house, you’re going to push the loudspeakers up against the wall. Any of these differences will have massive effects on the sound at the listening position.

Of course, there is only one way around this problem. If you’re buying a pair of loudspeakers, then you should talk to the salesperson about taking a demo pair home for a week or so – so that you can hear how they sound in your room. If you’re buying some other component in the audio chain that might have an impact on the sound, you should ask to take it home and try it out with your loudspeakers.

If you were buying a car, you would take it for a test drive – and you would probably get out of the parking lot of the car dealer when you did so. You have to take it out on the road to see how it feels. The same is true for audio equipment – if you can’t take it home to try it out, make sure that the shop has a good return policy. Just because it sounds good in the shop doesn’t mean that it’s going to sound good in your living room.


Tip #4: Personal Taste

I like single-malt scotch. Personally, I really like peaty, smoky scotch – other people like other kinds of scotch. There’s a good book by Michael Jackson (no, not that Michael Jackson – another Michael Jackson) that rates scotches. Personally, this is a good book for me, because, not only does he give a little background for each of the distilleries, and a description of the taste of each of the scotches in there – but he scores them according to his own personal ranking system. Luckily for me, Michael Jackson and I share a lot of the same preferences – so if he likes a scotch, chances are that I will too. So, his ranking scores are a pretty good indicator for me. However, if he and I had different preferences, then his ranking system would be pretty useless.

One of my favourite quotations is from Duke Ellington who said “If it sounds good, it is good.” I firmly believe that this is true. If you like the sound of a pair of loudspeakers, then they’re good for you. Any measurement or review in a magazine that disagrees is wrong. Of course, a measurement or a reviewer might be able to point you to something that you haven’t noticed about your loudspeakers (which may make you like them a little more or a little less…) but if you like the way they sound, then there’s no need to apologise for that.

However, remember that, when you read a review, you are reading the words of someone who also has personal taste. Yes, he/she may have heard many more loudspeakers than you have in many more listening rooms – but they still have personal preference. And, part of that personal preference is a ranking of the categories in which a loudspeaker should perform. Personally, I divide an audio system’s (or a recording’s) qualities into 5 broad categories: 1. Timbral (tone colour), 2. Spatial (i.e. imaging and spaciousness), 3. Temporal (i.e. punch, transient response), 4. Dynamics (not just total dynamic range, but also things like short term “dynamic contrast”) and 5. Distortion & Noise. Each of these has sub-headings in my head – but the relative importance of these 5 qualities is really an issue of my personal preference (and my expectations for a given product – nobody expects an iThing dock to deliver good imaging, for example…). If your personal preference of the weighting of these 5 categories (assuming that you agree with my 5 categories) is different from mine, then we’re going to like different audio systems. And that’s okay. No problem – apart from the minor issue that, if I were a reviewer working for an audio magazine, you shouldn’t buy anything I recommend. I like sushi – you like steak – so if I recommend a good restaurant, you should probably eat somewhere else.

Of course, the fact that I will listen to different music played at a different level in a different listening room than you will might also have an effect on the difference between our opinions.

 

Tip #5: Close your eyes

This one is a no-brainer for people who do “real” listening tests for scientific research – but it still seems to be a mystery to people who review audio gear for a living. If you want to make a fair comparison between two pieces of audio gear, you cannot, under any circumstances, know what it is that you’re comparing. There was a perfect proof of this done by Kristina Busenitz at an Audio Engineering Society convention one year. Throughout the convention, participants were invited to do a listening test on two comparable automotive audio systems. Both were installed in identical cars, parked side-by-side. The two cars were aligned to have identical reproduction levels and you listened to exactly the same materials to make exactly the same judgements about the systems. You had to sit in the same seat (i.e. front passenger side) for both tests, and you had to do the two evaluations back to back. One car had a branded system in it, the other was unbranded – made obvious by the posters hanging on the wall next to one of the cars. The cars were evaluated by lots of people over the 3 or 4 days of the convention. At the end, the results were processed and it was easily proven that the branded system was judged by a vast majority of the participants to be better than the unbranded system.

There was just one catch – every couple of hours, the staff running the test would swap the posters to the opposite wall. The two cars were actually identical. The only difference was the posters that hung outside them.

So, the vast majority of professional audio engineers agreed, in a completely “fair” test, that the car with the posters (which was the opposite car every couple of hours) sounded better than the one that didn’t.

Of course, what Kristina proved was that your eyes have a bigger effect on your opinion than your ears. If you see more expensive loudspeakers, they’ll probably sound better. This is why, when we’re running listening tests internally at Bang & Olufsen, we hide the loudspeakers behind an acoustically transparent, but visually opaque curtain. We can’t help but be influenced by our pre-formed opinions of products. We’ve even seen that a packing box for a (competitor’s) loudspeaker sitting outside the listening room will influence the results of a blind listening test on a loudspeaker that has nothing to do with the label on the box. (The box was a plant – just to see what would happen.)

Tip #6: Are you sure?

One last thing that really should go without saying: If you’re doing a back-to-back comparison of two different aspects of an audio system, be absolutely sure that you’re only comparing what you think you’re comparing. For example, I’ve read of people who do comparisons of things like the following:

  • sending a “bitstream” vs. “PCM” from a Blu-ray or DVD player to an AVR/Surround Processor
  • PCM vs. DSD
  • “normal” resolution vs. “high-resolution” recordings

If you’re making such a comparison, and you plan on making some conclusions, be absolutely sure that the only thing that is changing in your comparison is what you think you’re comparing. In the three examples I gave above, there are potentially lots of other things changing in your signal path in addition to the thing you’re changing. If you’re not absolutely sure that you’re only changing one thing, then you can’t be sure that the reason you might hear a difference in the things you’re comparing is due to the thing you’re comparing. (did that make sense?) For example, given the three above examples:

  • some AVRs apply different processing to bitstream vs. PCM signals. Some use the metadata in a bitstream, and some players don’t when they convert to PCM. So, the REASON the bitstream and the PCM signals might sound different is not necessarily because of the signals themselves, but how the gear treats them. (see this posting for more information on this)
  • Some DACs (meaning the chip on the circuit board inside the box that you have in your gear) apply different filters to a DSD signal than a PCM signal. Some have a different gain on the DSD signal (some “high resolution” software-based players also apply different gains to DSD and PCM signals). So, don’t just switch from DSD to PCM and think that, because you can hear a difference, that difference is the difference in DSD and PCM. It might just be your Equal Loudness Contours playing tricks on you.
  • Some DACs (see previous point for my current definition of “DAC”) apply different filters to signals at different sampling rates. Don’t judge two recordings you bought at different sampling rates and think that the only difference is the sampling rate. The gear that you’re using to play the files might behave differently at different rates.

And so on.

A good analogy to this is to go to a coffee shop and buy two cups of coffee – one medium roast and one dark roast. While you’re not looking, I’ll add a little sugar to the dark roast cup – and I’ll bribe the person that made your coffee to make the medium one a couple of degrees colder than the dark one. You taste both, and you decide that dark roast is better than medium roast. But is your decision valid? No. Maybe you like sugar in your coffee. Maybe you prefer hotter coffee.  Be careful how you make up your mind…

 

Summary

So, to wrap up, there are (at least) six things to remember when you’re shopping for audio gear:

  1. If you’re comparing systems, make sure that you’re listening at the same level.
  2. Always listen to a system using a recording with which you’re familiar – even if it’s a bad recording. Better something you know than something you don’t.
  3. Evaluate a system that you’re planning on buying in your own listening room.
  4. Don’t let anyone tell you what sounds good or bad. Ask them what they are listening to and for in a recording or a sound system – but decide for yourself what you like.
  5. If the listening test isn’t blind, it’s not worth much. Don’t even trust your own ears if you know what you’re listening to – your ears are easily fooled by your eyes and your pre-conceived notions. And you’re not alone.
  6. Be very sure that if you’re comparing two things, then the things you think you’re comparing are the only things that you’re comparing.

 

High-res audio codes: What’s what?

Back in the “old days”, people used to take a look at a three-letter code on CD packaging that indicated the domain used for the Recording, Mastering, and Distribution media. Usually, you saw things like “DDD” (meaning “Digital, Digital, Digital”) or “ADD” (for an Analogue recording that was mastered and distributed in the Digital domain).

Nowadays, there’s plenty of discussion about “high-resolution” audio – but one of the things that nobody has seemed to agree on is exactly what is “high” and what is “normal” resolution (although I, personally, would also include George Massenburg’s call for a “Vile-Resolution” classification as well).

Well, finally, important people have gotten together to agree on how high is enough to be called “high”  – and how to tell consumers about it. The details can be found here: Link.

Some details from that page are below:

The descriptors for the Master Quality Recording categories are as follows:

MQ-P
From a PCM master source 48 kHz/20 bit or higher; (typically 96/24 or 192/24 content)

MQ-A
From an analog master source

MQ-C
From a CD master source (44.1 kHz/16 bit content)

MQ-D
From a DSD/DSF master source (typically 2.8 or 5.6 MHz content)

B&O Tech: What is “Loudness”?

#21 in a series of articles about the technology behind Bang & Olufsen loudspeakers

Part 1: Equal Loudness Contours

Let’s start with some depressing news: You can’t trust your ears. Sorry, but none of us can.

There are lots of reasons for this, and the statement is actually far more wide-reaching than any of us would like to admit. However, in this article, we’re going to look at one small aspect of the statement, and what we might be able to do to get around the problem.

We’ll begin with a thought experiment (although, for some of you, this may be an experiment that you have actually done). Imagine that you go into the quietest room that you’ve ever been in, and you are given a button to press and a pair of headphones to put on. Then you sit and wait for a while until you calm down and your ears settle in to the silence… While that’s happening you read the instructions of the task with which you are presented:

Whenever you hear a tone in the headphones in either one of your ears, please press the button.

Simple! Hear a beep, press the button. What could be more difficult to do than that?

Then, the test begins: you hear a beep in your left ear and you press the button. You hear another, quieter beep and you press the button again. You hear an even quieter beep and you press the button. You hear nothing, and you don’t press the button. You hear a beep and you press the button. Then you hear a beep at a lower frequency and so on and so on. This goes on and on at different levels, at different frequencies, in your two ears, until someone comes in the room and says “thank you, that will be all”.

While this test seems like it would be pretty easy to do, it’s a little unnerving. This is because the room that you’re sitting in is so quiet and the beeps are also so quiet that, sometimes you think you hear a beep – but you’re not sure, because things like the sound of your heartbeat, and your breathing, and the “swooshing” of blood in your body, and that faint ringing in your ears, and the noise you made by shifting in your chair are all, relatively speaking VERY loud compared to the beeps that you’re trying to detect.

Anyways, when you’re done, you might be presented with a graph that shows something called your “threshold of hearing”. This is a map of how loud a particular frequency has to be in order for you to hear it. The first thing that you’ll notice is that you are less sensitive to some frequencies than others. Specifically, a very low frequency or a very high frequency has to be much louder for you to hear it than if you’re listening to a mid-range frequency. (There are evolutionary reasons for this that we’ll discuss at the end.) Take a look at the bottom curve on Figure 1, below:

Fig 1: The threshold of hearing (bottom curve) and the Equal Loudness contours for 70 phons (red curve) and 90 phons (top curve) according to ISO226.

The bottom curve on this plot shows a typical result for a threshold of hearing test for a person with average hearing and no serious impairments or temporary issues (like wax build-up in the ear canal). What you can see there is that, for a 1 kHz tone, your threshold of hearing is 0 dB SPL (in fact, this is how 0 dB SPL is defined…). As you go lower in frequency from there, you will have to turn up the actual signal level just to hear it. So, for example, you would need approximately 60 dB SPL at 30 Hz in order to be able to detect that something is coming out of your headphones or loudspeakers. Similarly, you would need something like 10 dB SPL at 10 kHz in order to hear it. However, at 3.5 kHz, you can hear tones that are quieter than 0 dB SPL! It stands to reason, then, that a 30 Hz tone at 60 dB SPL and a 1 kHz tone at 0 dB SPL and a 3.5 kHz tone at about -10 dB SPL and a 10 kHz tone at about 10 dB SPL would all appear to have the same loudness level (since they are all just audible).

Let’s now re-do the test, but we’ll change the instructions slightly. I’ll give you a volume knob instead of a button and I’ll play two tones at different frequencies. The volume knob only changes the level of one of the two tones, and your task is to make the two tones the same apparent level. If you do this over and over for different frequencies, and you plot the results, you might wind up with something like the red or the top curves in Fig 1. These are called “Equal Loudness Contours” (some people call them “Fletcher-Munson Curves” because the first two researchers to talk about them were Fletcher and Munson) because they show how loud different frequencies have to be in order for you to think that they have the same loudness. So, (looking at the red curve) a 40 Hz tone at 100 dB SPL sounds like it’s the same loudness as a 1 kHz tone at 70 dB SPL or a 7.5 kHz tone at 80 dB SPL. The loudness level that you think you’re hearing is measured in “phons” – and the phon value of the curve is its value in dB SPL at 1 kHz. For example, the red curve crosses the 1 kHz line at 70 dB SPL, so it’s the “70 phon” curve. Any tone that has an actual level in dB SPL that corresponds to a point on that red line will have an apparent loudness of 70 phons. The top curve is the 90 phon curve.
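To put some numbers on this, here’s a tiny Python sketch of the idea. Note the hedge: the three frequency/level pairs below are just the example values quoted above – the real ISO 226 contour has many more points – and the straight-line interpolation on a log-frequency axis is an illustration, not the standard’s method.

```python
import math

# A few illustrative points on the 70 phon contour, taken from the
# example values in the text (NOT the full ISO 226 dataset):
# frequency in Hz -> level in dB SPL needed to sound 70 phons loud.
CONTOUR_70_PHON = [(40.0, 100.0), (1000.0, 70.0), (7500.0, 80.0)]

def spl_for_70_phon(freq_hz):
    """Interpolate (linearly, on a log-frequency axis) the dB SPL at
    which a tone at freq_hz has an apparent loudness of 70 phons."""
    pts = sorted(CONTOUR_70_PHON)
    if freq_hz <= pts[0][0]:
        return pts[0][1]
    if freq_hz >= pts[-1][0]:
        return pts[-1][1]
    for (f0, l0), (f1, l1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            t = (math.log10(freq_hz) - math.log10(f0)) / (math.log10(f1) - math.log10(f0))
            return l0 + t * (l1 - l0)

# By definition, phons equal dB SPL at 1 kHz:
print(spl_for_70_phon(1000.0))  # 70.0
```

Every point on the curve has the same apparent loudness, which is exactly what the function expresses: it maps a frequency to the dB SPL that lands you on the 70 phon contour.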

Figure 2 shows the Equal Loudness Contours from 0 phons (the Threshold of Hearing) to 90 phons in steps of 10 phons.

Fig 2: The Equal Loudness contours for 0 phons (bottom curve) to 90 phons (top curve) in 10 phon increments, according to ISO226.

There are two important things to notice about these curves. The first is that they are not “flat”. In other words, your ears do not have a flat frequency response. In fact, if your hearing were measured the same way we measure microphones or loudspeakers, you’d have a frequency response specification that looked something like “20 Hz – 15 kHz ±30 dB” or so… This isn’t something to worry about, because we all have the same problem. It means that the orchestra conductor asked the bass section to play louder because he’s bad at hearing low frequencies, and the recording engineer balancing the recording adjusted the bass-to-midrange-to-treble relative levels using his equally bad hearing. So, assuming that the recording system and your playback system are reasonably flat-ish, then hopefully your hearing is identically bad to the conductor’s and the recording engineer’s, and you hear what they want you to.

However, I said that there are two things to notice – that was just the first thing. The second thing is that the curves are different at different levels. For example, if you look at the 0 phon curve (the bottom one) you’ll see that it rises a lot more in the low frequency region than, say, the 90 phon curve (the top one), relative to their mid-range values. This means that the quieter the signal, the worse your ability to hear bass (and treble). For example, let’s take the curves and assume that the 70 phon line is our reference – so we’ll make that one flat, and adjust all of the others accordingly and plot them so we can see their difference. That’s shown in Figure 3.

Fig 3: The Equal Loudness contours for 0 phons (bottom curve) to 90 phons (top curve) in 10 phon increments, according to ISO226. These have all been normalised to the 70 phon curve and subsequently inverted.

What does Figure 3 show us, exactly? Well, one way to think of it is to go back to our “recording engineer vs. you” example. Let’s say that the recording engineer who did the recording set the volume knob in the recording studio so that (s)he was hearing the orchestra with a loudness at the 70 phon line. In other words, if the orchestra was playing a 1 kHz sine tone, then the level of the signal was 70 dB SPL at the listening position – and all other frequencies were balanced by the conductor and the engineer to appear to sound the same level as that. Then you take the recording home and set the volume so that you’re hearing things at the 30 phon level (because you’re having a dinner party and you want to hear the conversation more than you want to hear Beethoven or Justin Bieber, depending on your taste or lack thereof). Look at the curve that intersects the -40 dB line at 1 kHz (the 4th one from the bottom) in Figure 3. This shows you your sensitivity difference relative to the recording engineer’s in this example. The curve slopes downwards – meaning that you can’t hear bass as well – so, your recording playing in the background will appear to have a lot less bass and a little less treble than what the recording engineer heard – just because you turned down the volume. (Of course, this may be a good thing, since you’re having dinner and you probably don’t want to be distracted from the conversation by thumpy bass and sparkly high frequencies.)

Part 2: Compensation

In order to counteract this “misbehaviour” in your hearing, we have to change the balance of the frequency bands in the opposite direction to what your ears are doing. So, if we just take the curves in Figure 3 and flip each of them upside down, we have a “perfect” correction curve showing that, when you turn down the volume by, say, 40 dB (hint: look at the value at 1 kHz), you’ll need to turn up the low end considerably to compensate and make the overall balance sound the same.

Fig 4: The Equal Loudness contours for 0 phons (bottom curve) to 90 phons (top curve) in 10 phon increments, according to ISO226. These have all been normalised to the 70 phon curve.

Of course, these curves shown in Figure 4 are normalised to one specific curve – in this case, the 70 phon curve. So, if your recording engineer was monitoring at another level (say, 80 phons) then your “perfect” correction curves will be wrong.

And, since there’s no telling (at least with music recordings) what level the recording and mastering engineers used to make the recording that you’re listening to right now (or the one you’ll hear after this one), then there’s no way of predicting what curve you should use to do the correction for your volume setting.

All we can really say is that, generally, if you turn down the volume, you’ll have to turn up the bass and treble to compensate. The more you turn down the volume, the more you’ll have to compensate. However, the EXACT amount by which you should compensate is unknown, since you don’t know anything about the playback (or monitoring) levels when the recording was done. (This isn’t the same for movies, since re-recording engineers are supposed to work at a fixed monitoring level which should be the same as all the cinemas in the world… in theory…)
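To make the “normalise and invert” idea concrete, here’s a minimal Python sketch. The contour numbers below are rough placeholders that I’ve made up for illustration – they are not real ISO 226 values – but the arithmetic is the same as what Figure 4 shows.

```python
# Rough placeholder contours (frequency in Hz -> dB SPL), standing in
# for the real ISO 226 data: one for the engineer's monitoring level
# (~70 phons) and one for your quiet dinner-party level (~30 phons).
REF_CONTOUR = {100: 80.0, 1000: 70.0, 10000: 75.0}
LISTEN_CONTOUR = {100: 55.0, 1000: 30.0, 10000: 42.0}

def loudness_boost_db(freq_hz):
    """Boost needed at freq_hz so quiet playback keeps the tonal
    balance heard at the reference level: normalise each contour to
    its own 1 kHz value, then take the difference."""
    ref = REF_CONTOUR[freq_hz] - REF_CONTOUR[1000]
    lis = LISTEN_CONTOUR[freq_hz] - LISTEN_CONTOUR[1000]
    return lis - ref

print(loudness_boost_db(100))   # 15.0 -- the quiet listener needs extra bass
print(loudness_boost_db(1000))  # 0.0 -- no correction at the 1 kHz anchor
```

The sign of the result tells you the direction of the correction: positive at the frequency extremes (boost), and zero at 1 kHz, where both contours are anchored.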

This compensation is called “loudness” – although in some cases it would be better termed “auto-loudness”. In the old days, a “loudness” switch was one that, when engaged, increased the bass and treble levels for quiet listening. (Of course, what most people did was hit the “loudness” switch and leave it on forever.) Nowadays, however, this is usually automatically applied and has different amounts of boost for different volume settings (hence the “auto-” in “auto-loudness”). For example, if you look at Figure 5 you’ll see the various amounts of boost applied to the signal at different volume settings of the BeoPlay V1 / BeoVision 11 / BeoSystem 4 / BeoVision Avant when the default settings have not been changed. The lower the volume setting, the higher the boost.

Fig 5: The equalisation applied by the “Loudness” function at different volume settings in the BeoPlay V1, BeoVision 11, BeoSystem 3 and BeoVision Avant. Note that these are the default settings and are customisable by the user.

Of course, in a perfect world, the system would know exactly what the monitoring level was when the recording was made, and the auto-loudness equalisation would change dynamically from recording to recording. However, until there is meta-data included in the recording itself that can tell the system information like that, there will be no way of knowing how much to add (or subtract).

Historical Note

I mentioned above that the extra sensitivity we have in the 3 kHz region is there due to evolution. In fact, it’s a natural boost applied to the signal hitting your eardrum as a result of the resonance of the ear canal. We have this boost (I guess, more accurately, we have this ear canal) because, if you snap a twig or step on some dry leaves, the noise that you hear is roughly in that frequency region. So, once-upon-a-time, when our ancestors were something else’s lunch, the ones with the ear canals and the resulting mid-frequency boost were more sensitive to the noise of a sabre-toothed tiger stepping on a leaf while trying to sneak up behind them, and had a little extra head start when they were running away. (It’s like the T-shirt you can buy when visiting Banff, Alberta says: “I don’t need to run faster than the bear. I just need to run faster than you.”)

As an interesting side note to this: the end result is that our language has evolved to use this sensitive area. The consonants in our speech – the “s” and “t” sounds, for example – sit right in that sensitive region to make ourselves easiest to understand.

Warning note

You might come across some YouTube video or a downloadable file that lets you “check your hearing” using a swept sine wave. Don’t bother wasting your time with this. Unless the headphones that you’re using (and everything else in the playback chain) are VERY carefully calibrated, you can’t trust anything about such a demonstration. So don’t bother.

Warning note #2 – Post script…

I just saw on another website here that someone named John Duncan made the following comment about what I wrote in this article: “Having read it a couple of times now, tbh it feels like it is saying something important, I’m just not quite sure what. Is it that a reference volume is the most important thing in assessing hifi?” The answer to this is “Exactly!” Say you compare two sound systems (two different loudspeakers, or two different DACs, or two different amplifiers, and so on…). The moral of the stuff I talk about above is that, in such a comparison, not only do you have to make sure that you change only one thing in the system (for example, don’t compare two DACs using a different pair of loudspeakers connected to each one), you absolutely must ensure that the two things you’re comparing are playing at EXACTLY the same listening level. A difference of 1 dB will have an effect on your “frequency response” and make the two things sound like they have different timbral balances – even when they don’t.

For example, when I’m tuning a new loudspeaker at work, I always work at the same fixed listening level. (For me, this is: two channels of -20 dB FS full-band uncorrelated pink noise producing 70 dB SPL, C-weighted, at the listening position.) Before I start tuning, I set the level to match this so that I don’t get deceived by my own ears. If I tuned loudspeakers quieter than this, I would push up the bass to compensate. If I tuned louder, then I would reduce the bass. This gives me some consistency in my work. Of course, I check to see how the loudspeakers sound at other listening levels, but, when I’m tuning, it’s always at the same level.

High-Resolution Audio: More is not necessarily better…

I’ve been collecting some so-called “high-resolution” audio files over the past year or two (not including my good ol’ SACD’s and DVD-Audio’s that I bought back around the turn of the century, or my old 1/4″, half-track, 30 ips tapes that I have left over from the past century). (Please do not add a comment at the bottom about vinyl… I’m not in the mood for a fight today.)

Now, let’s get things straight at the outset. “High Resolution” means many things to many people. Some people say that it means “sampling rates above 44.1 kHz”. Other people say that it means “sampling rates at 88.2 kHz or higher”. Some people will say that it means 24 bits instead of 16, and that sampling rate arguments are for weenies. Other people say that if it’s more than one bit, it ain’t worth playing. And so on and so on. For the purposes of this posting, let’s say that “high resolution” is a blanket marketing term used these days by people selling a downloadable audio file that has a bit rate higher than 44.1 kHz / 16 bits, or 1378.125 kbps. (You can calculate this yourself as follows: 44100 samples per second * 16 bits per sample * 2 channels / 1024 bits in a kilobit = 1378.125.)

I’ll also go on record (ha ha…) as saying that I would rather listen to a good recording of a good tune played by good musicians recorded at 44.1 kHz / 16 bits (or even worse!) than a bad recording (whatever that means) of a boring tune performed poorly by musicians encumbered neither by talent nor the interest to rehearse (or any recording that used an auto-tuner). All of that being said, I will also say that I am skeptical when someone says that something is something when they could get away with it being nothing. So, I like to check once in a while to see if I’m getting what I was sold. So, I thought I might take some of my legally-acquired LPCM “high-resolution audio” files and do a quick analysis of their spectral content, just to see what’s there.
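As a quick sanity check, that bit-rate arithmetic takes only a few lines of Python (the division by 1024 simply follows the convention used above):

```python
# "CD quality" stereo LPCM bit rate, as computed in the text.
sample_rate = 44100  # samples per second
bit_depth = 16       # bits per sample
channels = 2

bits_per_second = sample_rate * bit_depth * channels  # 1,411,200 bits/s
kbps = bits_per_second / 1024  # 1024 bits per kilobit, per the convention above
print(kbps)  # 1378.125
```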
In order to do this, I wrote a little MATLAB script that

  • loads one channel of my audio file
  • takes a block of 2^18 samples, multiplies it by a Blackman-Harris function, and does a 2^18-point FFT on it
  • moves ahead 2^18 samples and repeats the previous step over and over until it gets to the end of the recording (no overlapping… but this isn’t really important for what I’m doing here…)
  • looks through all of the FFT results and takes the maximum value of all FFT results for each FFT bin (think of it as a peak monitor with an infinite hold function on each frequency bin)
  • plots the final result
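For anyone who wants to try this at home without MATLAB, here’s a rough Python/NumPy equivalent of those steps. It’s a sketch, not the original script – the Blackman-Harris window is written out by hand, and the demo uses a short FFT length so it runs quickly.

```python
import numpy as np

def blackman_harris(n):
    """4-term Blackman-Harris window."""
    k = np.arange(n)
    a = (0.35875, 0.48829, 0.14128, 0.01168)
    return (a[0]
            - a[1] * np.cos(2 * np.pi * k / (n - 1))
            + a[2] * np.cos(4 * np.pi * k / (n - 1))
            - a[3] * np.cos(6 * np.pi * k / (n - 1)))

def peak_hold_spectrum(x, n=2**18):
    """Non-overlapping n-sample blocks, each windowed and FFT'd, with a
    per-bin maximum held across all blocks (a peak monitor with an
    infinite hold function on each frequency bin)."""
    win = blackman_harris(n)
    peak = np.zeros(n // 2 + 1)
    for start in range(0, len(x) - n + 1, n):
        mag = np.abs(np.fft.rfft(x[start:start + n] * win)) / (n / 2)
        peak = np.maximum(peak, mag)
    return 20 * np.log10(np.maximum(peak, 1e-20))  # in dB

# Demo: a full-scale sine sitting exactly on bin 100 of a 4096-point FFT.
n = 4096
t = np.arange(4 * n)
x = np.sin(2 * np.pi * 100 * t / n)
spec = peak_hold_spectrum(x, n=n)
print(np.argmax(spec))  # 100
```

Note that the peak reads about 9 dB below 0 dB FS even for a full-scale sine – that’s the coherent gain of the Blackman-Harris window, an effect that comes up again in the appendix.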

So, the graphs below are the result of that process for some different tunes that I selected from my collection.

Track #1

Track 1 (an 88.2/24 file) is plotted first. Not much to tell here. You can see that, starting at about 1 kHz or so, the amplitude of the signals starts falling off. This is not surprising. If it did not do that, then we would use white noise instead of pink noise to give us a rough representation of the spectrum of music. You may notice that the levels seem quite low – the maximum level on the plot being about -40 dB FS – but keep in mind that this is (partly) because, at no point in the tune, was there a sine wave that had a higher level than that. It does not mean that the peak level in the tune was -40 dB FS.

Track 1: Full spectrum

The second plot of the same tune just shows the details in the top 2 octaves of the recording. Since this is an 88.2 kHz file, we’re looking at the spectrum from 11025 Hz to 44100 Hz. I’ve plotted this spectrum on a linear frequency scale so that it’s easier to see some of the details in the top end. This isn’t so important for this tune, but it will come in handy below…

Track 1: Top 2 octaves

Track #2

The full-bandwidth plot for Track #2 (a 96/24 file) is shown below.

Track 2: Full bandwidth

This one is interesting if you take a look up at the very high end of the plot – shown in detail in the figure below.

Track 2: Top 2 octaves

Here, you can see a couple of things. Firstly, you can see that there is a rise in the noise from about 35 kHz up to about 45 kHz. This is possibly (maybe even probably) the result of some kind of noise shaping applied to the signal, which is not necessarily a bad thing – unless you have equipment with intermodulation distortion issues in the high end that would cause energy in that region to fold back down. However, since that stuff is at least 80 dB below maximum, I certainly won’t lose any sleep over it. Secondly, you can see that there is a very steep low pass filter (probably an anti-aliasing filter) that causes the signal to drop off above about 45 kHz. Note that the boost in the energy just before the steep roll-off might be the result of a peak in the low pass filter’s response – but I doubt it. It’s more a “maybe” than a “probably”. You may also have some questions about why the noise floor above about 46 kHz seems to flatten out at about -190 dB FS. This is probably not due to content in the recording itself. It is more likely “spectral leakage” from the windowing that comes along with making an FFT. I’ll talk a little about this at the end of this article.

Track #3

The third track on my hit list (another 96/24 file) is interesting…

Track 3: Full bandwidth

Take a look at the spike there around 20 kHz… What the heck is that doing there!? Let’s take a look at the zoom (shown below) to see if it makes more sense.

Track 3: Top 2 octaves

Okay, so zooming in more didn’t help – all we know is that there is something in this recording that is singing along at about 20 kHz, at least for part of the recording (remember, I’m plotting the highest value found for each FFT bin…). If you’re wondering what it might be, I asked a bunch of smart friends, and the best explanation we can come up with is that it’s noise from a switched-mode power supply that is somehow bleeding into the recording. HOW it’s bleeding into the recording is a potentially interesting question for recording engineers. One possibility is that one of the musicians was charging a phone in the room where the microphones were – and the mics just picked up the noise. Another possibility is that the power supply noise is bleeding electrically into the recording chain – maybe it’s a computer power supply or the sound card, and the manufacturer hasn’t thought about isolating this high frequency noise from the audio path. Or, maybe it’s something else.

Track #4

This last track is also sold as a 48 kHz, 24 bit recording. The total spectrum is shown below.

Track 4: Full bandwidth

This one is particularly interesting if we zoom in on the top end…

Track 4: Top 2 octaves

This one has an interesting change in slope as we near the top end. As you go up, you can see the knee of a low-pass filter around 20 kHz, and a second one at around 23 kHz. This could be explained a couple of different ways, but one possible explanation is that it was originally a 44.1 kHz recording that was sample-rate converted to 48 kHz and sold as a higher-resolution file. The lower low-pass could be the anti-aliasing filter of the original 44.1 kHz recording. When the tune was converted to 48 kHz (assuming that it was…), there was some error (either noise or distortion) generated by the conversion process. This also had to be low-pass filtered by a second anti-aliasing filter for the new sampling rate. Of course, that’s just a guess – it might be the result of something totally different.

So what?

So what did I learn? Well, as you can see in the four examples above, just because a track is sold under the banner of “high-resolution”, it doesn’t necessarily mean that it’s better than a “normal resolution” recording. This could be because the higher resolution doesn’t actually give you more content, or because it gives you content that you don’t necessarily want. Then again, it might mean that you get a nice, clean recording that has the resolution you paid for, as in the first track. It seems that there is a bit of a gamble involved here, unfortunately. I guess the phrase “don’t judge a book by its cover” could be updated to “don’t judge a recording by its resolution” – but it doesn’t really roll off the tongue quite so nicely, does it?

P.S.

Please do not bother asking what these four tracks are or where I bought them. I’m not telling. I’m not doing any of this to “out” anyone – I’m just saying “buyer beware”.

P.P.S.

Please do not use this article as proof that high resolution recordings are a load of hooey that aren’t worth the money. That’s not what I’m trying to prove here. I’m just trying to prove that things are not always as they are advertised – but sometimes they are. Whether or not high res audio files are worth the money when they ARE the real McCoy is up to you.

Appendix

I mentioned some things above about “spectral leakage” and FFT windowing and a Blackman Harris function. Let’s do a quick run-through of what this stuff means without getting into too many details.

When you do an FFT (a Fast Fourier Transform – but more correctly called a DFT or Discrete Fourier Transform in our case – but now I’m getting picky), you’re doing some math to convert a signal (like an audio recording) in the time domain into the frequency domain. For example, in the time domain, a sine wave will look like a wave, since it goes up and down in time. In the frequency domain, a sine wave will look like a single spike, because it contains only one frequency and no others. So, in a perfect world, an FFT would tell us what frequencies are contained in an audio recording. Luckily, it actually does this pretty well, but it has limitations.

An FFT applied to an audio signal has a fixed number of outputs, each one corresponding to a certain frequency. The longer the FFT that you do, the more resolution you have on the frequencies (in other words, the “frequency bins” or “frequency centres” are closer together). If the signal that you were analysing only contained frequencies that were exactly the same as the frequency bins that the FFT was reporting on, then it would tell you exactly what was in the signal – limited only by the resolution of your calculator.

However, if the signal contains frequencies that are different from the FFT’s frequency bins, then the energy in the signal “leaks” into the adjacent bins. This makes it look like there is a signal with a different frequency than actually exists – but it’s just a side effect of the FFT process – it’s not really there. The amount that the energy leaks into other frequency bins can be minimised by shaping the audio signal in time with a “windowing function”. There are many of these functions with different names and equations.
I happened to use the Blackman Harris function because it gives a good rejection of spectral artefacts that are far from the frequency centre, and because it produces relatively similar artefact levels regardless of whether your signal is on or off the frequency bin of the FFT. For more info on this, read this.
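Here’s a small Python illustration of the same effect (a sketch, not the script used for the figures below): a sine exactly on an FFT bin versus one halfway between two bins, analysed with a rectangular window and with a Blackman-Harris window.

```python
import numpy as np

def blackman_harris(n):
    """4-term Blackman-Harris window, written out by hand."""
    k = np.arange(n)
    a = (0.35875, 0.48829, 0.14128, 0.01168)
    return (a[0]
            - a[1] * np.cos(2 * np.pi * k / (n - 1))
            + a[2] * np.cos(4 * np.pi * k / (n - 1))
            - a[3] * np.cos(6 * np.pi * k / (n - 1)))

def spectrum_db(x, window):
    """Windowed FFT magnitude in dB, normalised so a full-scale
    on-bin sine with a rectangular window reads 0 dB."""
    mag = np.abs(np.fft.rfft(x * window)) / (len(x) / 2)
    return 20 * np.log10(np.maximum(mag, 1e-20))

n = 2**14
t = np.arange(n)
on_bin = np.sin(2 * np.pi * 1000 * t / n)      # exactly on FFT bin 1000
off_bin = np.sin(2 * np.pi * 1000.5 * t / n)   # halfway between two bins

rect = np.ones(n)
bh = blackman_harris(n)

# Far from the tone (bin 5000), the off-bin sine leaks much more with a
# rectangular window than with Blackman-Harris:
print(spectrum_db(off_bin, rect)[5000] > spectrum_db(off_bin, bh)[5000])  # True
```

If you also check the on-bin tone, you’ll see the Blackman-Harris peak sitting roughly 9 dB below the rectangular one – the same level difference noted in the detail plots.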

Spectral leakage of Blackman-Harris windowing function. 1000 Hz at 0 dB FS, Fs=2^16, FFT Window length = 2^16 samples. The black plot shows the magnitude response calculated using an FFT and a rectangular windowing function. The red curve is with a Blackman Harris function. Note that the spectral leakage caused by the Blackman Harris function “bleeds” energy into all other bins, resulting in apparently much higher values than in the case of the rectangular windowing function.

This is a detail showing the peak of the response for the 1000 Hz tone analysis. Note that the apparent level of the tone windowed using the Blackman Harris function is about 9 dB lower than when it’s windowed with a rectangular function.

 

Spectral leakage of Blackman-Harris windowing function. 1000.5 Hz at 0 dB FS, Fs=2^16, FFT Window length = 2^16 samples. The black plot shows the magnitude response calculated using an FFT and a rectangular windowing function. The red curve is with a Blackman Harris function. Now, since the frequency of the signal does not fall exactly on an FFT bin, the Blackman Harris – windowed signal appears “cleaner” than the one windowed using a rectangular function.

 

This is a detail showing the peak of the response for the 1000.5 Hz tone analysis.