B&O Tech: It’s lonely at the top

#36 in a series of articles about the technology behind Bang & Olufsen loudspeakers

 

“In all affairs it’s a healthy thing now and then to hang a question mark on the things you have long taken for granted.”

-Bertrand Russell

 

If you were to get your hands on Harry Potter’s invisibility cloak and you were to walk into the acoustics department at Bang & Olufsen and eavesdrop on conversations, you’d sometimes be amazed at some of the questions we ask each other. That’s particularly true when we’re working on a new concept, because, if the concept is new, then it’s also new for us. The good thing is that all of my colleagues (and I) are ready and willing at any time to ask what might be considered to be a stupid (or at least a very basic) question.

One of those questions that we asked each other recently seemed like a basic one – why do we always put the tweeter on the top? It seems like there are very few loudspeakers that don’t do this (of course, there are exceptions – take the BeoLab Penta, for example, which has the tweeter in the middle). However, more often than not, when we (and most other people) make a loudspeaker, we put the loudspeaker drivers in ascending order of frequency – woofers on the bottom, tweeters on the top. Maybe this is because woofers are heavier, and if you stand a BeoLab 5 on its head, it will fall over – but that’s not really the question we were asking…

The REAL question we were asking ourselves at the time was: if we were to build a multi-way loudspeaker – let’s say a 3-way, with a woofer, a midrange and a tweeter – and if the crossovers were such that the bulk of the “interesting” information (say, the vocal range) was coming from the midrange, then why would we not put the midrange at ear height and put the tweeter between it and the woofer? For example, imagine we made a BeoLab 20 without an Acoustic Lens on top – would it be better to arrange the drivers like the version on the left of Figure 1 or the version on the right? Which one would sound better?

 

Figure 1: Two versions of a BeoLab 20-like loudspeaker with different driver arrangements.

 

After answering that question, a second question would follow closely behind: how close together do the drivers with adjacent frequency bands (i.e. the woofer and the midrange, or the tweeter and the midrange) have to be in order for them to sound like one thing? Of course, these two questions are inter-related. If your midrange and tweeter are so far apart that they sound like different sound sources, then you would probably be more interested in where the voice was coming from than where the cymbals were coming from…

Of course, step one in answering the second question could be to calculate/simulate the response of the loudspeaker, based on distance between drivers, the crossover frequency (and therefore the wavelengths of the frequency band we’re interested in), the slopes of the crossover filters, and so on. It would also be pretty easy to make a prototype model out of MDF, put the loudspeaker drivers in there, do the vertical directivity measurements of the system in the Cube, and see how well the theory matches reality.

However, the question we were really interested in was “but how would it sound?” – just to get a rough idea before going any further. And I have to stress here that we were really talking about getting a rough idea. What I’m about to describe to you is a little undertaking that we put together in a day – just to find out what would happen. This was not a big, scientifically-valid experiment using a large number of subjects and intensive statistics to evaluate the results. It was a couple of guys having a chat over coffee one morning when one of them said “I wonder what would happen if we put the midrange on top…” and then, off they went to a listening room to find out.

One thing we have learned in doing both “quick-n-dirty” listening comparisons and “real, scientifically valid listening tests” is that the placement of a loudspeaker in a room has a huge effect on how the loudspeaker sounds. So, when we’re comparing two loudspeakers, we try to put them as close together as possible. We tried different versions of this. In the first, we took two pairs of loudspeakers, and put the left loudspeakers in each pair side-by-side, with one pair upside down and the other right-side up, as shown in Figure 2.

 

Figure 2: One arrangement of the loudspeakers used in the first part of the experiment. “Pair A” is in black and “Pair B” is in red.

 

We then switched between the right-side up pair and the upside down pair, listening for a change in the vertical position of the image. Note that we tried two arrangements of this – one where both right-side up loudspeakers were to the left of the upside-down loudspeakers, and another where the right-side up loudspeakers were the “outside” pair, as shown in Figure 3.

Figure 3: Another arrangement of the loudspeakers used in the first part of the experiment.

 

There are advantages and disadvantages of both of these arrangements – but in both cases, there is a lateral shift in the stereo image. When switching between pairs, either you get a left-right shift in image, or a change in width… It turned out that this change was more distracting than the vertical arrangement of the drivers, so we changed configuration to the one shown in Figure 4, below.

Figure 4: Another arrangement of the loudspeakers used in the first part of the experiment. Note that, in this version, we used the loudspeaker pairs differently. “Pair A” used the red loudspeakers as tweeters and the black loudspeakers as woofers using an external crossover. “Pair B” used the red loudspeakers as woofers and the black loudspeakers as tweeters.

 

Now, instead of switching between loudspeakers, we pretended that one of them was a tweeter and the other was a mid-woofer, and switched which was which, on the fly. Our “virtual” crossover was close-ish to the real crossover in the loudspeakers (our crossover was at 3.5 kHz, if you’re curious), so you could say that we were sort-of changing between using the upper tweeter + the lower woofer and using the upper woofer + lower tweeter, the “acoustical centres” of which are roughly at the same height. (Remember – this was a quick-n-dirty experiment…)
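The article only mentions the 3.5 kHz crossover point, not the filter slopes. As an illustration of how such a “virtual” crossover could be built, here is a sketch using a 4th-order Linkwitz-Riley alignment – a common choice, but an assumption on my part – implemented with SciPy:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000        # sample rate in Hz
fc = 3500.0       # the crossover frequency mentioned in the text

# A 4th-order Linkwitz-Riley crossover is two cascaded 2nd-order
# Butterworth filters per band (an assumption here -- the article
# doesn't say which slopes were used in the experiment).
lp = np.vstack([butter(2, fc, btype="low", fs=fs, output="sos")] * 2)
hp = np.vstack([butter(2, fc, btype="high", fs=fs, output="sos")] * 2)

x = np.random.default_rng(0).standard_normal(fs)  # 1 s of test noise
to_woofer = sosfilt(lp, x)    # low band: sent to the "mid-woofer"
to_tweeter = sosfilt(hp, x)   # high band: sent to the "tweeter"
```

Swapping which physical loudspeaker receives `to_woofer` and which receives `to_tweeter` is, in effect, what was being switched on the fly.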

 

Figure 5: A photo of one configuration of the loudspeakers used in the first part of the experiment. Note that the BeoLab 9 and the BeoPlay V1 were not part of the experiment – we just didn’t bother to clean up before getting busy…

 

 

Figure 6: A photo of one configuration of the loudspeakers used in the first part of the experiment.

 

After having listened to these three configurations of loudspeakers, we decided that the vertical arrangement of the drivers was not important with the vertical separation we were using.

This brought us to the second part of the question… If the tweeter and the midrange were further apart, would we have a preference? So, we kept our virtual crossover, using one loudspeaker as the “tweeter” and the other as the “mid-woofer”, and we moved the loudspeakers further apart, one example of which is shown in Figure 7. (One thing to note here is that when I say “further apart” I’m talking about the separation in the vertical angles of the sources – not necessarily their actual distance from each other. For example, if the loudspeakers were 1 m apart vertically, and you were level with one of the loudspeakers, but the listening position was a kilometre away, then the vertical angular separation (0.057 degrees) would be much smaller than if you were 1 m away (45 degrees)…)
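The arithmetic in that parenthetical is easy to check. Here’s a small sketch (my own, not from the article) of the vertical angular separation seen by a listener who is level with one of the two drivers:

```python
import math

def vertical_separation_deg(spacing_m, distance_m):
    """Angle subtended by two vertically-spaced sources, as seen by a
    listener who is level with the lower one, distance_m away."""
    return math.degrees(math.atan2(spacing_m, distance_m))

# Drivers 1 m apart vertically, seen from 1 km away vs. from 1 m away
far = vertical_separation_deg(1.0, 1000.0)   # ~0.057 degrees
near = vertical_separation_deg(1.0, 1.0)     # 45 degrees
```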

 

Figure 7: A photo of the loudspeakers used in the second part of the experiment. This was one (extreme) example of a vertical separation.

The answer we arrived at in the end was that, when the vertical separation of the drivers gets extreme (perhaps I should use the word “silly” instead), we preferred the configuration where the “mid-woofer” was at ear-height. However, this was basically a choice of which version we disliked less (“preference” is a loaded word…). When the drivers get far enough apart, of course, they no longer “hang together” as a single device without some extra signal processing tricks.

So, we went to lunch having made the decision that, as long as the tweeters and the midranges are close enough to each other vertically, we really didn’t have a preference as to which was on top, so, if anyone asked, we would let the visual designers decide. (Remember that “close enough” is not only determined by this little experiment – it is also determined by the wavelengths of the crossover frequencies and whether or not there are more drivers to consider. For example, in the example in Figure 1, it might be that we don’t care about the relative placement of the tweeter and midrange in isolation – but perhaps putting the tweeter between the midrange and woofer will make them too far apart to have a nice vertical directivity behaviour across the lower crossover…)
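To get a rough sense of scale for “close enough”, here is the wavelength at the crossover frequency we used. (The quarter-wavelength figure below is a common rule of thumb for driver spacing, not something stated in the article.)

```python
def wavelength_m(f_hz, c=343.0):
    """Wavelength of sound in air at roughly room temperature."""
    return c / f_hz

lam = wavelength_m(3500.0)    # ~0.098 m at the 3.5 kHz crossover
quarter_wave = lam / 4.0      # ~0.025 m -- one common spacing guideline
```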

 

Addendum

This isn’t the first time we asked ourselves such a question. Although I was not yet working at B&O at the time, my colleagues tell me that, back in the days when they were developing the BeoLab 3500 and the BeoLab 7-1 (both of which are “stereo” loudspeakers – essentially two multi-way loudspeakers in a single device), they questioned the driver arrangement as well. Should the tweeters or the midranges / woofers be on the outside? You can see in the photos of the 3500 on this page that they decided to place the lower-frequency drivers wider apart, because the overall impression with stereo signals was wider than if the tweeters were on the outside.

 

B&O Tech: Active Room Compensation

#35 in a series of articles about the technology behind Bang & Olufsen loudspeakers

 

Introduction:
Why do I need to compensate for my room?

Take a look at Figure 1. You’ll see a pair of headphones (BeoPlay H6‘s, if you’re curious…) sitting under a lamp that is lighting them directly. (That lamp is the only light source in the room. I can’t prove it, so you’ll have to trust me on this one…) You can see the headphones because the light is shining on them, right? Well… sort of.

Figure 1: An object being lit with direct and reflected light.

What happens if we put something between the lamp and the headphones? Take a look at Figure 2, which was taken with the same camera, the same lens, the same shutter speed, the same F-stop, and the same ISO (in other words, I’m not playing any tricks on you – I promise…).

Figure 2: An object being lit with reflected light only.

Notice that you can still see the headphones, even though there is no direct light shining on them. This probably does not come as a surprise, since there is a mirror next to them – there is enough light bouncing off the mirror back to the headphones that we can still see them. In fact, there’s enough light from the mirror that we can see the shadow caused by the reflected lamp (which is also visible in Figure 1, if you’re paying attention…).

If you don’t believe me, look around the room you’re sitting in right now. You can probably see everything in it – even the things that do not have light shining directly on them (for example, the wall behind an open door, or the floor beneath your feet if you lift them a little…)

Exactly the same is true for sound. Let’s turn the lamp into a loudspeaker and the headphones on the floor into you, in the listening position and send a “click” sound (what we geeks call an “impulse”) out of the loudspeaker. What arrives at the listening position? This is illustrated in Figure 3, which is what we call an “impulse response” – how a room responds to an impulse (a click coming from a loudspeaker).

Figure 3: The Impulse Response of a loudspeaker in a room at one location. The top plot shows the impulse (a “click” sound) sent to the loudspeaker. The bottom plot shows the sound received at the listening position.

 

 

The top plot in Figure 3 shows the signal that is sent to the input of the loudspeaker. The bottom plot is the signal at the input of the microphone placed at the listening position. If we zoom in on the bottom plot, the result is Figure 4. This makes it much easier to see the direct sound and the reflections as separate components.

Figure 4: A zoom of the bottom plot in Figure 3.

If we zoom in even further to the beginning of the plot in Figure 4, we can see individual reflections in the room’s response, as is shown in Figure 5.

Figure 5: A zoom of the plot in Figure 4 showing the direct sound and some of the reflections off of surfaces in the room. Note that the first reflection is only about 12 dB quieter than the direct sound.

 

Let’s take the total impulse response and separate the direct sound from the rest. This is shown in Figure 6.

 

Figure 6: The impulse response of a loudspeaker in a room, separating the direct sound (in red) from the reflections (in blue).

 

We can then calculate the magnitude responses of the two separate components individually to see what their relative contributions are – shown in Figure 7.
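This split-and-analyse step can be sketched numerically. The impulse response below is synthetic (the real measurement isn’t published with the article), but the procedure – window out the direct sound, then take the magnitude spectrum of each part – is the same:

```python
import numpy as np

fs = 48000
n = fs                                   # 1 s of impulse response

# Synthetic IR: a direct spike, then exponentially decaying reflections
rng = np.random.default_rng(0)
t = np.arange(n) / fs
ir = np.zeros(n)
ir[0] = 1.0                              # direct sound
tail = 0.25 * rng.standard_normal(n) * np.exp(-t / 0.3)
tail[: fs // 1000] = 0.0                 # reflections start ~1 ms later
ir += tail

# Separate direct from reflected with a simple time window...
split = fs // 1000
direct = np.where(np.arange(n) < split, ir, 0.0)
reflected = ir - direct

# ...then compare their magnitude responses
mag_direct = 20 * np.log10(np.abs(np.fft.rfft(direct)) + 1e-12)
mag_reflected = 20 * np.log10(np.abs(np.fft.rfft(reflected)) + 1e-12)
```

Even in this toy version, the reflected part carries far more total energy than the direct spike, although no single sample of the tail is louder – the same effect discussed under Figure 7.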

 

Figure 7: The magnitude responses of the separate components of the total impulse response in Figure 6. Red: Direct sound, Blue: Reflections. 1/3 octave smoothed

 

Now, before you get carried away, I have to say up-front that this plot is a little misleading for many reasons – but I’ll only mention two…

The first is that it shows that the direct sound is quieter than the reflected sound in almost all frequency bands, but as you can see in Figure 6, the reflected energy is never actually louder than the direct sound. However, the reflected energy lasts for much longer than the direct sound, which is why the analysis “sees” it as containing more energy – but you don’t hear the decay in the room’s response at the same time when you play a click out of the loudspeaker. Then again, you usually don’t listen to a click – you listen to music, so you’re listening to the end of the room decay on the music that happened a second ago while you’re listening to the middle of the decay on the music that happened a half-second ago while you’re listening to the direct sound of the music that happened just now… So, at any given time, if you’re playing music (assuming that this music was constantly loud – like Metallica, for example…), you’re hearing a lot of energy from the room smearing music from the recent past, compared to the amount of energy in the direct sound which is the most recent thing to come out of the loudspeaker.

The second is in the apparent magnitude response of the direct sound. It appears from the red curve in Figure 7 that this loudspeaker has a response that lacks low frequency energy. This is not actually true – the loudspeaker that I used for this measurement actually has a flat on-axis magnitude response within about 1 dB from well below 20 Hz to well above 20 kHz. However, in order to see the actual response of the loudspeaker, I would have to use a much longer slice of time than the little red spike shown in Figure 6. In other words, the weirdness in the magnitude response is an artefact of the time-slicing of the impulse response. The details of this are complicated, so I won’t bother explaining them in this article – you’ll just have to trust me when I say that that isn’t really the actual response of the loudspeaker in free space…

The “punch line” for all of this is that the room has a significant influence on the perceived sound of the loudspeaker (something I talked about in more detail in this article). The more reflective the surfaces in the room, the more influence it has on the sound. (Also, the more omnidirectional the loudspeaker, the more energy it sends in more directions in the room, which also will mean that the room has more influence on the total sound at the listening position… but there’s more information about that in the article on Beam Width Control.)

So, if the room has a significant influence on the sound of the loudspeaker at the listening position, then it’s smart to want to do something about it. In a best case (and very generally speaking…), we would want to measure the effects that the room has on the overall sound of the loudspeaker and “undo” them. The problem is that we can’t actually undo them without changing the room itself. However, we can make some compensation for some aspects of the effects of the room. For example, one of the obvious things in the blue curve in Figure 7 is that the listening room I did the measurement in has a nasty resonance in the low end (specifically, it’s at about 57 Hz, which is the second axial mode for the depth of the room, which is about 6 m). It would certainly help the overall sound of the loudspeaker to put in a notch filter at that frequency – in a best case, we should measure the phase response of the room’s resonance and insert a filter that has the opposite phase response. But that’s just the beginning with one mode – there are lots more things to fix…
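The 57 Hz figure follows directly from the axial-mode formula fₙ = n·c / 2L. A quick check:

```python
def axial_modes_hz(length_m, c=343.0, count=4):
    """Axial room mode frequencies f_n = n * c / (2 * L) for one
    room dimension L, with c the speed of sound in m/s."""
    return [n * c / (2.0 * length_m) for n in range(1, count + 1)]

modes = axial_modes_hz(6.0)   # room depth of about 6 m
# modes[1] (the second axial mode) is ~57 Hz, as in the text
```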

 

A short history

Almost all Bang & Olufsen loudspeakers have a switch that allows you to change their magnitude response to compensate for the position of the loudspeaker in the room. This is typically called a Free/Wall/Corner switch, since it’s designed to offset the changes to the timbre of the loudspeaker caused by the closest boundaries. There’s a whole article about this effect and how we make a filter to compensate for it at this link.

In 2002, Bang & Olufsen took this a step further when it introduced the BeoLab 5, which included ABC – Automatic Bass Calibration. This is a system that uses a microphone to measure the effects of the listening room’s acoustical behaviour on the sound of the loudspeaker, and then creates a filter that compensates for those effects in the low frequency band. As a simple example, if your room tends to increase the apparent bass level, then the BeoLab 5’s reduce their bass level by the same amount. This system works very well, but it has some drawbacks. Specifically, ABC is designed to improve the response of the loudspeaker averaged over all locations in the room. This follows the philosophy stated by Spock in Star Trek II: The Wrath of Khan: “the needs of the many outweigh the needs of the few, or the one.” In other words, in order to make the averaged response of the loudspeaker better in all locations in the room, it could be that the response at one location in the room (say, the “sweet spot”, for example…) gets worse (whatever that might mean…). This philosophy behind ABC makes sense in BeoLab 5, since it is designed as a loudspeaker that has a wide horizontal directivity – meaning it is designed as a loudspeaker for “social” listening, not as a loudspeaker for someone with one chair and no friends… Therefore an improved average room response would “win” in importance over an improved sweet spot.

 

Active Room Compensation

We are currently working on taking this concept to a new level with Active Room Compensation. Using an external microphone, we can measure the effects of the room’s acoustical behaviour in different zones in the room and subsequently optimise compensation filters for different situations. For example, in order to duplicate the behaviour of BeoLab 5’s ABC, we just need to use the microphone to measure a number of widely-spaced locations around the room, thus giving us a total average for the space. However, if we want to create a room compensation filter for a single location – the sweet spot, for example – then we can restrict the locations of the microphone measurements to that area within the room. If we want to have a compensation filter that is pretty good for the whole room, but has emphasis on the sweet spot, we just have to make more measurements in the sweet spot than in the rest of the room. The weighting of importance of different locations in the room can be determined by the number of microphone measurements we do in each location. Of course, this isn’t as simple a procedure as pressing one button, as in ABC on the BeoLab 5, but it has the advantage of being able to create a compensation filter for a specific location instead of for the whole listening space.
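That weighting idea can be illustrated with a toy calculation. The three-band responses and the numbers below are made up for the example (the real system works on full measured responses), but the principle is just that measuring a zone more often weights it more heavily in a plain average:

```python
import numpy as np

def averaged_response(measurements):
    """Plain average of measured magnitude responses, one per microphone
    position. Repeating a position in the list weights it more heavily."""
    return np.mean(np.asarray(measurements), axis=0)

# Hypothetical 3-band responses in dB: two sweet-spot measurements
# and one from elsewhere in the room
sweet = np.array([0.0, -6.0, 3.0])
corner = np.array([9.0, 0.0, -3.0])
avg = averaged_response([sweet, sweet, corner])  # sweet spot counts double
```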

As part of this work, we are developing a new concept in acoustical room compensation: multichannel processing. This means that the loudspeakers not only “see” each other as having an effect on the room – they help each other to control the room’s acoustical influence. So, if you play music in the left loudspeaker only, then some sound will also come out of the right loudspeaker. This is because both the left and right loudspeakers are working together to control the room (which is being “activated” by sound only from the left loudspeaker).

 

B&O Tech: What is “Beam Width Control”?

#34 in a series of articles about the technology behind Bang & Olufsen loudspeakers

 

A little background:
Distance Perception in “Real Life”

Go to the middle of a snow-covered frozen lake with a loudspeaker, a chair, and a friend. Sit on the chair, close your eyes and get your friend to place the loudspeaker some distance from you. Keep your eyes closed, play some sounds out of the loudspeaker and try to estimate how far away it is. You will be wrong (unless you’re VERY lucky). Why? It’s because, in real life with real sources in real spaces, distance information (in other words, the information that tells you how far away a sound source is) comes mainly from the relationship between the direct sound and the early reflections that come at you horizontally. If you get the direct sound only, then you get no distance information. Add the early reflections and you can very easily tell how far away it is. If you’re interested in digging into this on a more geeky level, this report is a good starting point.

A little more background:
Distance perception in a recording

Recording engineers use this information as a trick to simulate differences in apparent distance to sound sources in a stereo recording by playing with the so-called “dry-wet” ratio – in other words, the relative levels of the direct sound and the reverb. I first learned this in the little booklet that came with my first piece of recording gear – an Alesis Microverb (1st generation… It was a while ago…). To be honest – this is a bit of an over-simplification, but it’s good enough to work (for example, listen to the reverberation on Grover’s voice change as he moves from “near” to “far” in this video). The people at another reverb unit manufacturer know that the truth requires a little more detail. For example, their flagship reverb unit uses correctly-positioned and correctly-delayed early reflections to deliver a believable room size and sound source location in that room.
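The dry/wet trick itself is simple enough to sketch. This is a deliberate over-simplification, as noted above, and the signal names are invented for the example:

```python
import numpy as np

def distance_mix(dry, wet, wet_fraction):
    """Crude recording-engineer distance trick: raising the wet (reverb)
    level relative to the dry (direct) signal pushes the source "farther
    away" in the mix."""
    return (1.0 - wet_fraction) * dry + wet_fraction * wet

dry = np.array([1.0, 0.5, -0.5])      # stand-in for a dry vocal
wet = np.array([0.1, 0.2, 0.1])       # stand-in for its reverb return
near = distance_mix(dry, wet, 0.1)    # mostly dry: sounds close
far = distance_mix(dry, wet, 0.7)     # mostly wet: sounds distant
```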

Recording Studios vs. Living Rooms

When a recording engineer makes a recording in a well-designed studio, he or she is sitting not only in a carefully-designed acoustical space, but a very special area within that space. In many recording studios, there is an area behind the mixing console where there are no (or at least almost no) reflections from the sidewalls. This is accomplished either by putting acoustically absorptive materials on the walls to soak up the sound so it cannot reflect (as shown in Figure 1), or by angling the walls so that the reflections are directed away from the listening position (as shown in Figure 2).

Figure 1: A typical floorplan for a recording studio that was built inside an existing room. The large rectangle is the recording console. The blue triangles are acoustically absorptive materials.

 

Figure 2: A typical floorplan for a recording studio that was designed for the purpose. The large rectangle is the recording console. Note that the side walls are angled to reflect energy away from the listening position.

Both of these are significantly different from what happens in a typical domestic listening room (in other words, your living room) where the walls on either side of the listening position are usually acoustically reflective, as is shown in Figure 3.

Figure 3: A typical floorplan for a living room used as a listening room.

 

In order to get the same acoustical behaviour at the listening position in your living room that the recording engineer had in the studio, we will have to reduce the amount of energy that is reflected off the side walls. If we do not want to change the room, one way to do this is to change the behaviour of the loudspeaker by focusing the beam of sound so that it stays directed at the listening position, but it sends less sound to the sides, towards the walls, as is shown in Figure 4.

Figure 4: A representation of a system using loudspeakers that send less energy towards the sidewalls. Note that there are still sidewall reflections – they’re just less noticeable.

So, if you could reduce the width of the beam of sound directed out the front of the loudspeaker to be narrower to reduce the level of sidewall reflections, you would get a more accurate representation of the sound the recording engineer heard when the recording was made. This is because, although you still have sidewalls that are reflective, there is less energy going towards them that will reflect to the listening position.

However, if you’re sharing your music with friends or family, depending on where people are sitting, the beam may be too narrow to ensure that everyone has the same experience. In this case, it may be desirable to make the loudspeaker’s sound beam wider. Of course, this can be extended to its extreme where the loudspeaker’s beam width is extended to radiate sound in all directions equally. This may be a good setting for cases where you have many people moving around the listening space, as may be the case at a party, for example.

For the past 5 or 6 years, we in the acoustics department at Bang & Olufsen have been working on a loudspeaker technology that allows us to change this radiation pattern using a system we call Beam Width Control. Using lots of DSP power, racks of amplifiers, and loudspeaker drivers, we are able to not only create the beam width that we want (or switch on-the-fly between different beam widths), but we can do so over a wide frequency range. This allows us to listen to the results, and design the directivity pattern of a loudspeaker, just as we currently design its timbral characteristics by sculpting its magnitude response. This means that we can not only decide how a loudspeaker “sounds” – but how it represents the spatial properties of the recording.

 

What does Beam Width Control do?

Let’s start by taking a simple recording – Suzanne Vega singing “Tom’s Diner”. This is a song that consists only of a fairly dryly-recorded voice without any accompanying instruments. If you play this tune over “normal” multi-way loudspeakers, the distance to the voice can (depending on the specifics of the loudspeakers and the listening room’s reflective surfaces) sound a little odd. As I discussed in more detail in this article, different beam widths (or, if you’re a little geeky – “differences in directivity”) at different frequency bands can cause artefacts like Vega’s “t”s and “s”s appearing to be closer to you than her vowel sounds, as I have tried to represent in Figure 5.

Figure 5: A spatial map representing the location of the voice in Suzanne Vega’s recording of Tom’s Diner. Beam Width Control = off. Note that the actual experience is that some frequency bands in her voice appear closer than others. This is due to the fact that the loudspeakers have different directivities at different frequencies.

 

Figure 6: The directivity of the system as a “normal” multi-way loudspeaker. 3 dB per contour to -12 dB relative to on-axis.

If you then switch to a loudspeaker with a narrow beam width (such as that shown in the directivity plot in Figure 7 – the beam width is the vertical thickness of the shape in the plot – note that it’s wide in the low frequencies and narrowest at 10,000 Hz), you don’t get much energy reflected off the side walls of the listening room. You should also notice that the contour lines are almost parallel, which means that the beam width doesn’t change as much with frequency as it did in Figure 6.

Figure 7: The directivity of the system in “narrow” beam width. 3 dB per contour to -12 dB relative to on-axis.

Since there is very little reflected energy in the recording itself, the result is that the voice seems to float in space as a pinpoint, roughly half-way between the listening position and the loudspeakers – much as was the case of the sound of your friend on the snow-covered lake. In addition, as you can see in Figure 7, the beam width of the loudspeaker’s radiation is almost the same at all frequencies – which means that, not only does Vega’s voice float in a location between you and the loudspeakers, but all frequency bands of her voice appear to be the same distance from you. This is represented in Figure 8.

Figure 8: A spatial map representing the location of the voice in Suzanne Vega’s recording of Tom’s Diner. Beam Width = narrow.

If we then switch to a completely different beam width that sends sound in all directions, making a kind of omnidirectional loudspeaker (with a directivity response as is shown in Figure 9), then there are at least three significant changes in the perceived sound. (If you’re familiar with such plots, you’ll be able to see the “lobing” and diffraction caused by various things, including the hard corners on our MDF loudspeaker enclosures. See this article for more information about this little issue… )

Figure 9: The directivity of the system in “omni” beam width. 3 dB per contour to -12 dB relative to on-axis.

The first big change is that the timbre of the voice is considerably different – particularly in the mid-range (although you could easily argue that this particular recording only has mid-range…). This is caused by the “addition” of reflections from the listening room’s walls at the listening position (since we’re now sending more energy towards the room boundaries). The second change is in the apparent distance to the voice. It now appears to be floating at a distance that is the same as the distance to the loudspeakers from the listening position. (In other words, she moved away from you…). The third change is in the apparent width of the phantom image – it becomes much wider and “fuzzier” – like a slightly wide cloud floating between the loudspeakers (instead of a pin-point location). The total result is represented in Figure 10, below.

Figure 10: A spatial map representing the location of the voice in Suzanne Vega’s recording of Tom’s Diner. Beam Width = omni.


All three of these artefacts are the result of the increased energy from the wall reflections.

Of course, we don’t need to go from a very narrow to an omnidirectional beam width. We could find a “middle ground” – similar to the 180º beam width of BeoLab 5 and call that “wide”. The result of this is shown in Figures 11 and 12, with a measurement of the BeoLab 5’s directivity shown for comparison in Figure 13.

Figure 11: The directivity of the system in “wide” beam width. 3 dB per contour to -12 dB relative to on-axis.


Figure 12: A spatial map representing the location of the voice in Suzanne Vega’s recording of Tom’s Diner. Beam Width = wide.


Figure 13: The directivity of a BeoLab 5.

If we do the same comparison using a more complex mix (say, Jennifer Warnes singing “Bird on a Wire” for example) the difference in the spatial representation is something like that which is shown in Figures 14 and 15. (Compare these to the map shown in this article.) Please note that these are merely an “artist’s rendition” of the effect and should not be taken as precise representations of the perceived spatial representation of the mixes. Actual results will certainly vary from listener to listener, room to room, and with changes in loudspeaker placement relative to room boundaries.


Figure 14: A spatial map representing the locations of some of the sound sources in Jennifer Warnes’s recording of Bird on a Wire. Beam Width = narrow.


Figure 15: A spatial map representing the locations of some of the sound sources in Jennifer Warnes’s recording of Bird on a Wire. Beam Width = wide.


Of course, everything I’ve said above assumes that you’re sitting in the “sweet spot” – a location equidistant from the two loudspeakers, at which both loudspeakers are aimed. If you’re not, then the perceived differences between the “narrow” and “omni” beam widths will be very different, because you’re sitting outside the narrow beam. For starters, the direct sound from the loudspeakers in “omni” mode will be louder than when they’re in “narrow” mode. In an extreme case, with the loudspeakers in “narrow” mode and aimed at a wall instead of the listening position, the reflection will be louder than the direct sound – but now I’m getting pedantic.


Wrapping up…

The idea here is that we’re experimenting with building a loudspeaker that can deliver a narrow beam width so that, if you’re like me – the kind of person who has one chair and no friends, and who knows what a “stereo sweet spot” is – you can sit in that chair and hear the same spatial representation that the recording engineer heard in the recording studio (without having to change your living room’s acoustical treatment). However, if you do happen to have some friends visiting, you have the option of switching over to a wider beam width so that everyone shares a more similar experience. It won’t sound as good (whatever that might mean to you…) in the sweet spot, but it might sound better if you’re somewhere else. Similarly, if you take that to an extreme and have a LOT of friends over, you can use the “omni” beam width and get a more even distribution of background music throughout the room.


For more information on Beam Width Control

Shark Fins and the birth of Beam Width Control

Beam Width Control – A Primer


Post-script

For an outsider’s view, please see the following…

“Ny lydteknikk fra Bang & Olufsen” – Lyd & Bilde (Norway)

Stereophile magazine (October 2015, Print edition) did an article on their experiences hearing the prototypes as well.

“BeoLab 90: B&O laver banebrydende højttaler” – Lyd & Bilde (Norway)

Speakers and Sneakers

I recently received an email from someone asking the following question:

“I’ve been reading your blog for a while and a question popped up. What do you think of the practice of ‘breaking in’ speakers? Is there any truth to it or is it simply just another one of the million myths believed by audiophiles?”

To answer this question, I’ll tell a story.

At work, we have a small collection of loudspeakers – not only current B&O models, but older ones as well. In addition, of course, we have a number of loudspeakers made by our competitors. Many loudspeakers in this collection don’t get used very often, but occasionally, we’ll bring out a pair to have a listen as a refresher or reminder. Usually, the way this works is that one of us from the acoustics department will sneak into the listening room with a pair of loudspeakers, and set them up behind an acoustically transparent, but visually opaque curtain. The rest of us then get together and listen to the same collection of recordings at the same listening level, each of us sitting in the same chair. We talk about how things sound, and then we open the curtain to see what we’ve been complaining about.

One day, about three years ago, it was my turn to bring in the loudspeakers, so I set up a pair of passive loudspeakers (not B&O) that have a reasonably good reputation. We had a listen and everyone agreed that the sound was less than optimal (to be polite…). No bass, harsh upper midrange, everything sounded like it was weirdly compressed. Not many of us had anything nice to say. I opened the curtain, and everyone in the room was surprised when they saw what we had listened to – since we would have all expected things to sound much better.

Later that day, I spoke with one of our colleagues who had not been in the room, and I told him the story – no one liked the sound, but those speakers should have sounded better. His advice was to wait until the next week and play the same loudspeakers again – but this time, to play pink noise through them at a reasonably high level for a couple of hours before we listened. So, the next week, the day before our scheduled listening session, I set up the same speakers in the same locations in the room, and played pink noise at about 70 dB SPL through them overnight. The next morning, we had our blind listening session, and everyone in the room agreed that the sound was quite good – much better than the week before. I opened the curtain and everyone was surprised again to see that nothing had changed. Or had it? I was as surprised as anyone, since my religious belief precludes this story from being true. But I was there… it actually happened.

So, what’s the explanation? Simple! Go to the store and buy two identical pairs of sneakers (or “running shoes” or “trainers”, depending on where you’re from). When you get home, take one pair out of the box, and wear them daily. After three or four months, take the pair that you left in the box and try them on. They will NOT feel the same as the pair you’ve been wearing. This is not a surprise – the leather, plastic, and rubber in the sneakers you’ve been wearing have been stretched and flexed, and now fit your foot better than the ones you have not been wearing. In addition, you’ll probably notice that the “old” ones are more flexible in the places where your foot bends, because you’ve been bending them.

It turns out (according to the colleague who suggested the pink noise trick, and who also used to design and make loudspeaker drivers for a living) that the suspension (the surround and spider) of a loudspeaker driver becomes more flexible with repeated flexing – just like your sneakers. If you take a pair of loudspeakers out of the box, plug them in, and start listening, they’ll be stiff. You need to work them a little to “loosen them up”.

This is not only true of new sneakers (and speakers) but also of old sneakers (and speakers). For example, I keep my old sneakers around to use when I’m mowing the lawn. When I stick my foot into the sneakers that I haven’t worn all winter, they feel stiff, like a new pair, because they haven’t been flexed for a while. This is what happened to those speakers that I brought upstairs after years in basement storage: the suspensions had become stiff and needed to be exercised a little before we used them for listening to music.
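If you want to try the overnight pink-noise trick yourself, pink noise is easy to generate. Here’s a rough sketch (assuming Python with NumPy; the frequency-domain shaping method is one common approach, not necessarily what my colleague used, and calibrating the playback to roughly 70 dB SPL is done with a level meter at the listening position, not in the file itself):

```python
import numpy as np

def pink_noise(n_samples, fs=48000, seed=0):
    """Generate pink (1/f power) noise by shaping white noise in the
    frequency domain: each bin's amplitude is scaled by 1/sqrt(f),
    which gives pink noise's -3 dB per octave slope."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    freqs[0] = freqs[1]                    # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)
    pink = np.fft.irfft(spectrum, n=n_samples)
    return pink / np.max(np.abs(pink))     # normalise to +/- 1.0 full scale

noise = pink_noise(2 * 48000)  # two seconds at 48 kHz
```

In practice you’d loop or stream a buffer like this to the loudspeakers for a couple of hours; generating a longer buffer (or generating on the fly) avoids an audible loop point.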


A small problem that compounds the complexity of evaluating this issue is that we also “get used to” how things sound. So, as you’re “breaking in” a loudspeaker by listening to it, you are also learning and accommodating yourself to how it sounds, so you’re both changing simultaneously. Unless you have the option of playing a trick on people like I did with my colleagues, it’s difficult to make a reliable judgement of how big a difference this makes.