Verifying headset performance

MathWorks Australia
By Richard Hodges, principal scientist, Plantronics
Friday, 12 August, 2011


The consumer market for telephone headsets is noted for its innovative products and its fast pace. Companies produce products with new features almost weekly. In this environment, the market life of headsets is becoming very short; in fact, some products can have a life of only six months or so.

This puts severe downward pressure on our project development times. We have to produce new models before the competition, with a headset that offers features the competition doesn’t have.

Plantronics has developed a novel design platform to help us accelerate innovation, development and verification.

A consumer telephone headset combines several interacting components, each with its own very different behaviour. For example, effective noise cancellation depends on the interaction of the microphone, the earphones, electronic signal processing and people.

We rely on so-called ‘golden ears’ listeners to assess audio quality, so people are very much part of the headset development process.

To provide better audio quality and more features, we add more signal processing, which requires more powerful embedded hardware and software. This introduces compile-build-download delays into our development process.

Consider a test scenario in which a golden ears listener detects an audio issue, perhaps with adaptive gain. We correct the adaptive gain algorithm, recompile and build our software using an IDE on a PC.

We download the built software to our embedded hardware and start the testing process all over again. Each bug we detect results in another costly and time-consuming compile-build-download cycle.

When we analysed this process, it became clear that it would be much more efficient to ‘tune’ the signal-processing algorithms while a call is in progress, by altering algorithm parameters or even changing the algorithms used.

This would enable us to substantially reduce the time and cost of fixing bugs and improving performance.

In the adaptive gain example, if we could alter the gain algorithm while the test was running, we could implement and test our fix much more quickly.
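To make the idea concrete, here is a minimal automatic gain control sketch in Python (illustrative only; it is not the Plantronics algorithm, and the target_rms and step parameters are hypothetical tuning knobs a tester could change between audio blocks without rebuilding anything):

```python
import math

def agc_block(samples, gain, target_rms=0.1, step=0.05):
    """Apply the current gain to one audio block, then nudge the gain
    toward a target RMS level. Returning the updated gain lets a tester
    inspect or override it between blocks while a test runs."""
    out = [s * gain for s in samples]
    rms = math.sqrt(sum(s * s for s in out) / len(out)) or 1e-9
    # Multiplicative update: gain rises on quiet blocks, falls on loud ones.
    gain *= (target_rms / rms) ** step
    return out, gain

# Feed successive blocks of a quiet 440 Hz tone; the gain converges
# until the output sits at the target level.
gain = 1.0
block = [0.01 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(256)]
for _ in range(200):
    out, gain = agc_block(block, gain)
```

With these example numbers the input RMS is about 0.007, so the gain settles near 14 to reach the 0.1 target.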

Unfortunately, standard embedded development environments offer limited support for this kind of real-time, on-the-fly modification.

While we couldn’t eliminate the compile-build-download cycle, with the right platform we knew we could rapidly create and test algorithms and systems before beginning embedded development.

This would enable us to work out the bugs in our system before we started to implement it on an embedded target.

To do this, however, we needed a platform that incorporated human listeners, audio hardware and signal processing in the same system. We needed a platform that enabled us to make changes to our system as it was operating. Of course, we also wanted it to be flexible and cost-effective.

For some time now, it’s been possible to perform real-time audio processing on PCs. Unfortunately, this has required custom software development or dedicated audio processing software.

Custom software gives flexibility, but is very expensive. In contrast, dedicated audio processing software is less expensive, but considerably less flexible.

Due to improvements in PC processing power, it has recently become possible to model audio processing systems in real time using flexible and readily available simulation software. By linking simulation software to our audio hardware, we can create a development and test platform that’s both flexible and affordable.

Our platform consists of three elements: a standard PC, simulation software and external audio hardware (see Figure 1).

 
Figure 1: The development and verification platform.

The PC includes an audio stream input/output (ASIO) sound card. ASIO is necessary to guarantee sample-accurate synchronisation and a fixed processing delay between transmit and receive signals. The sound card handles audio input and output and serves as a digital audio data interface for the simulation software.

The simulation software is Simulink from MathWorks. We chose Simulink because it offers several key advantages. First, it links well with external hardware, including most ASIO sound cards, which is crucial for our application.

Second, it is a visual design environment allowing engineers to interact with models as they build and execute them. Lastly, it enables us to change model parameters while a simulation is running.

The audio hardware links the simulation software to the user and the telephone network. The wireline telephony system operates at much higher voltages than audio electronics, so we electrically isolate it from the rest of the system.

The analog audio signals from the phone network and from the microphone are converted into a digital form (using pulse-code modulation or PCM) by a MOTU FireWire 828mk2 audio I/O box, using the ASIO software interface. The digital audio data is fed to the PC over a FireWire connection.

The only custom software development needed in the entire project was a Simulink block that reads and writes data to the ASIO interface.

On this platform, we execute the majority of our signal processing algorithms in Simulink in real time. For efficiency, we typically use Simulink Rapid Accelerator mode, which speeds up simulation. While the simulation is executing, we can interact with the Simulink model and change parameters.

For example, we can change the gain during a live call. We can even switch to completely different audio processing algorithms during tests. This capability enables us to compare different echo cancellation algorithms, for example.
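In Simulink this hot-swapping is built in; as a plain-Python illustration of the pattern, a processing chain can dispatch through a name that is reassigned while the stream is running (the two "algorithms" here are trivial stand-ins):

```python
# Two trivial stand-in "algorithms"; in practice each would be a DSP chain.
def passthrough(block):
    return block

def half_gain(block):
    return [s * 0.5 for s in block]

algorithms = {"A": passthrough, "B": half_gain}
active = "A"                      # can be reassigned while audio is flowing

def process(block):
    # Dispatch through the current selection on every block.
    return algorithms[active](block)

out1 = process([1.0, 1.0])        # processed by algorithm A
active = "B"                      # hot-swap: no recompile, no restart
out2 = process([1.0, 1.0])        # same input, now processed by algorithm B
```

Because the switch happens between blocks, the comparison runs under identical live conditions.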

Once our testing is complete and we are satisfied that our signal processing system is behaving correctly, we implement the system on our embedded target. We use the same compile-build-download cycle as before.

This time, however, we have thoroughly simulated the system and worked out almost all the errors before we start embedded development. Using this process, we’ve substantially reduced development and verification time.

 
Figure 2: A Simulink model of the echo cancellation system.

We use our platform not only for audio algorithm development but also for algorithm and system verification. Our platform can be used for wired headsets, as well as Bluetooth-compatible headsets via a Plantronics Bluetooth USB dongle for audio I/O.

We develop individual algorithms in MATLAB or C and assemble them into a system using Simulink. We test the algorithms by using Simulink to generate test signals (for example, a sine chirp).

Using Simulink’s graphing capabilities we can examine signal properties such as energy spectral density.
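As an illustrative sketch of that test flow (in practice Simulink generates the signals and plots), the following Python fragment builds a linear sine chirp and computes its energy spectral density with a direct DFT; the sample rate, sweep range and duration are arbitrary example values:

```python
import math, cmath

fs = 8000                         # sample rate (Hz), example value
dur = 0.064                       # 512 samples
f0, f1 = 200.0, 3000.0            # sweep range (Hz), example values
n = int(fs * dur)

# Linear sine chirp: instantaneous frequency sweeps from f0 to f1.
k = (f1 - f0) / dur
chirp = [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
         for t in (i / fs for i in range(n))]

def esd(x):
    """Energy spectral density via a direct DFT: |X[m]|^2 per bin."""
    N = len(x)
    return [abs(sum(x[i] * cmath.exp(-2j * math.pi * m * i / N)
                    for i in range(N))) ** 2 for m in range(N // 2)]

spectrum = esd(chirp)             # energy should concentrate in 200-3000 Hz
```

Plotting such a spectrum immediately shows whether an algorithm under test colours the signal outside the intended band.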

Analysing signal properties is a fairly standard procedure for audio development work, but our platform enables us to take things a step further by testing algorithms during live phone calls. During algorithm and system verification, we connect the hardware into our simulation model.

With the model running in real time, we conduct live phone calls using our platform. A golden ears listener can take part in a conference call, for example, and tweak various parameters to improve the audio quality during the call.

Echo cancellation algorithms provide a good example of our process. Without signal processing, users will hear echoes and howling due to feedback between the microphone and the earphones. Cancellation of this echo is not trivial. There are two audio inputs into the system: the input from the telephone network and audio picked up by the microphone.

Echo cancellation must take into account both audio sources and cancel out the signal appropriately so the user only hears the audio from the telephone network. There are several echo cancellation techniques that can be used, each with its own set of parameters for fine-tuning.
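One standard family of techniques is the adaptive FIR filter. The Python sketch below shows a normalised LMS (NLMS) canceller as an illustration of the idea, not the specific algorithms we implemented; taps, mu and the simulated echo path are example values:

```python
import math

def nlms_echo_canceller(far_end, mic, taps=32, mu=0.5, eps=1e-6):
    """Adaptive FIR filter: estimate the echo of the far-end signal
    present in the microphone pickup and subtract it, leaving only
    the near-end audio in the output."""
    w = [0.0] * taps                  # adaptive filter weights
    buf = [0.0] * taps                # recent far-end samples
    out = []
    for x, d in zip(far_end, mic):
        buf = [x] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # echo estimate
        e = d - y                                    # echo-cancelled output
        norm = eps + sum(b * b for b in buf)         # input power, for NLMS
        w = [wi + mu * e * bi / norm for wi, bi in zip(w, buf)]
        out.append(e)
    return out

# Simulated test: the mic hears only a delayed, attenuated echo of the
# far end, so after convergence the residual should approach zero.
far = [math.sin(0.1 * i) for i in range(2000)]
mic = [0.6 * far[i - 3] if i >= 3 else 0.0 for i in range(2000)]
res = nlms_echo_canceller(far, mic)
```

The filter's tap count and step size mu are exactly the kind of parameters a tester would tune during a live call on our platform.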

At Plantronics, we implemented two different echo cancellation algorithms in Simulink (see Figure 2) and simulated them to see which worked best.

To test the echo cancellation algorithms, we implemented them in our model and connected our platform to the telephone network. During a live conference call, we modified the algorithm settings to improve the audio quality.

We evaluated the quality of the system in real time under a range of operating conditions, including call volume levels.

On the same call, we also switched from one echo cancellation algorithm to another to compare clarity under the same conditions. It was easy to compare the clarity of the two algorithms because we could switch between them without recompilation and without stopping the simulation or the call.

In addition to echo cancellation, headsets need line cancellation algorithms to handle echoes introduced by the telephone network, and they need dynamic range controls, which alter the audio volume to boost low-volume sounds and limit higher-volume sounds.
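As a toy illustration of dynamic range control (not a production design), a feed-forward compressor tracks the signal envelope, reduces gain above a threshold, and applies make-up gain so quiet passages come up; all parameter values here are hypothetical:

```python
def compress(samples, thresh=0.1, ratio=4.0, attack=0.9, makeup=2.0):
    """Feed-forward compressor: track the envelope, reduce gain above
    the threshold by `ratio`, then apply make-up gain to lift quiet parts."""
    env = 0.0
    out = []
    for x in samples:
        env = max(abs(x), attack * env)              # fast-attack envelope
        if env > thresh:
            gain = (thresh + (env - thresh) / ratio) / env
        else:
            gain = 1.0
        out.append(x * gain * makeup)
    return out

loud = compress([1.0] * 10)       # settles at 0.65: limited, then lifted
quiet = compress([0.05] * 10)     # settles at 0.10: boosted by make-up gain
```

The loud-to-quiet ratio shrinks from 20:1 at the input to 6.5:1 at the output, which is the intended effect.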

Other algorithms are needed to comply with legislative mandates. For example, the European Union requires headsets to have antistartle properties, which limit how quickly headset volume can increase.
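One plausible way to sketch an antistartle limiter (illustrative only, not a compliance implementation) is to let the allowed playback level track the signal but rise by at most a fixed step per sample, while falling immediately:

```python
def antistartle(samples, max_rise=0.001):
    """Slew-limit the playback level: the allowed level follows the
    signal, but may only INCREASE by max_rise per sample, so a sudden
    loud burst is ramped in rather than hitting the ear at once."""
    level = 0.0
    out = []
    for x in samples:
        target = abs(x)
        if target > level:
            level = min(level + max_rise, target)    # ramp up slowly
        else:
            level = target                           # drop immediately
        out.append(max(-level, min(level, x)))
    return out

# A full-scale step input is ramped in: the first sample is clamped
# to max_rise, and the level grows linearly from there.
ramped = antistartle([1.0] * 100)
```

The max_rise parameter directly encodes "how quickly headset volume can increase", which is the quantity such mandates constrain.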

We are using our platform to rapidly develop and verify all these algorithms and more. We also use it to make better informed decisions on our bill of materials.

For example, speaker and microphone equalisation enables cheaper transducers to give better audio quality, and our platform enables us to evaluate our options.
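Equalisation of this kind is typically built from cascaded biquad filter sections whose coefficients are derived from the measured transducer response. A generic Direct Form I biquad in Python (the coefficients below are placeholders, not a fitted transducer correction):

```python
def biquad(samples, b, a):
    """Direct Form I biquad section. b = (b0, b1, b2) and a = (a0, a1, a2)
    are the feed-forward and feedback coefficients; a0 normalises both."""
    x1 = x2 = y1 = y2 = 0.0
    b0, b1, b2 = (c / a[0] for c in b)
    a1, a2 = a[1] / a[0], a[2] / a[0]
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x            # shift the input delay line
        y2, y1 = y1, y            # shift the output delay line
        out.append(y)
    return out

# Identity coefficients pass the signal through untouched.
ident = biquad([1.0, 0.0, 0.0, 0.0], (1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
# A two-tap average has unity DC gain: a constant input settles at 1.0.
smooth = biquad([1.0] * 50, (0.5, 0.5, 0.0), (1.0, 0.0, 0.0))
```

Swapping candidate coefficient sets in and out during a live call is exactly the kind of evaluation the platform makes cheap.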

Driven by market imperatives, Plantronics developed an innovative development and verification platform to shorten our design, development and test cycles. The platform itself is cost-effective, because it is based on a standard PC, Simulink simulation software and standard audio equipment.

More importantly, by enabling us to examine and modify the system while calls are in progress, the platform provides a level of insight and an understanding of system properties that we lacked previously.

This enables us to work through most design issues in simulation and thereby streamline our embedded target development.

Typically, increased development speed comes at the expense of product quality or increased budgets. Our approach has enabled us to improve in all three dimensions simultaneously.

We’ve accelerated development, kept costs down and delivered better sounding headsets.

