Teaching tomorrow's vehicles to 'hear'


Tuesday, 18 February, 2020


Modern cars already feature a range of sophisticated systems, but the self-driving cars of the future will also have auditory capabilities.

Today’s cars are equipped with a host of advanced driver-assistance systems designed to reduce the burden behind the wheel, with features such as automatic parking and blind-spot monitoring employing cameras, radar and lidar to detect obstacles in the immediate vicinity of the vehicle. In other words, they provide vehicles with a rudimentary sense of sight.

In the future, systems that can capture and identify external noises are set to play a key role — along with smart radar and camera sensors — in putting self-driving cars on the road. Researchers at the Fraunhofer Institute for Digital Media Technology IDMT are now developing AI-based systems that can recognise individual acoustic events such as sirens. These will give vehicles an auditory capability, endowing them with a sense of hearing.

“Despite the huge potential of such applications, no autonomous vehicle has yet been equipped with a system capable of perceiving external noises,” said Danilo Hollosi, Head of the Acoustic Event Recognition group at Fraunhofer IDMT. “Such systems would be able to immediately recognise the siren of an approaching emergency vehicle, for example, so that the autonomous vehicle would then know to move over to one side of the highway and form an access lane for the rescue services.”

There are numerous other scenarios in which an acoustic early-warning system can play a vital role — when an autonomous vehicle is turning into a pedestrian area or residential road where children are playing, for example, or for recognising defects or dangerous situations such as a nail in a tyre. In addition, such systems could also be used to monitor the condition of the vehicle or even double as an emergency telephone equipped with voice-recognition technology.

Noise analysis with AI-based algorithms

Developing a vehicle with auditory capability poses a number of challenges. Here, however, Fraunhofer IDMT can call on specific project experience in the field of automotive engineering as well as a wealth of interdisciplinary expertise. Key areas of investigation include signal capture on the basis of optimal sensor positioning as well as signal preprocessing, signal enhancement and the suppression of background noise.

The system is first trained to recognise the acoustic signature of each relevant sound event. This is done by machine-learning methods that use acoustic libraries compiled by Fraunhofer IDMT. In addition, Fraunhofer IDMT has written its own beamforming algorithms. These enable the system to dynamically locate moving sound sources such as the siren on an approaching emergency vehicle. The result is an intelligent sensor platform that is able to recognise specific sounds.
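Fraunhofer's beamforming algorithms themselves are proprietary, but the underlying principle — locating a sound source from the tiny time differences with which its sound reaches different microphones — can be sketched. The following minimal illustration is an assumption-laden stand-in, not Fraunhofer's implementation (the function name `estimate_tdoa`, the sample rate and the simulated signals are all invented for the example): it estimates the time difference of arrival of a siren-like tone between two microphones from the peak of their cross-correlation.

```python
import numpy as np

def estimate_tdoa(mic_a, mic_b, fs):
    """Estimate the time difference of arrival (in seconds) of a sound
    between two microphone signals, using the peak of the full
    cross-correlation. A positive result means the sound reached
    mic_a before mic_b."""
    corr = np.correlate(mic_b, mic_a, mode="full")
    lag = np.argmax(corr) - (len(mic_a) - 1)
    return lag / fs

# Simulated capture: a 700 Hz siren-like tone reaches mic_b
# 8 samples (0.5 ms) later than mic_a.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 700 * t)
delay = 8
mic_a = tone
mic_b = np.concatenate([np.zeros(delay), tone[:-delay]])

tdoa = estimate_tdoa(mic_a, mic_b, fs)  # → 0.0005 s
```

With several such microphone pairs, the set of time differences constrains the direction of the source, which is how an array can track a moving siren.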

A modified roof fin for testing acoustic sensors for the capture of external noise. Image ©Fraunhofer IDMT/Hannes Kalter.

Fraunhofer has also written its own AI-based algorithms. These are used to distinguish the specific noise that the system is designed to identify from other background noises.
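To illustrate the principle of separating a target sound from background noise — again as a simplified stand-in, not Fraunhofer's actual AI algorithms — one can score how much energy a captured audio frame contains at a characteristic frequency. The sketch below uses the standard Goertzel algorithm with an invented 700 Hz "siren" band; a real learned system would use far richer acoustic signatures than a single tone.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power of `samples` at a single target frequency,
    computed with the Goertzel algorithm."""
    n = len(samples)
    k = round(n * freq / fs)           # nearest DFT bin to `freq`
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs, n = 8000, 800
siren_frame = [math.sin(2 * math.pi * 700 * i / fs) for i in range(n)]
rumble_frame = [math.sin(2 * math.pi * 90 * i / fs) for i in range(n)]  # engine-like noise

# The siren-band energy is high for the siren frame, low for the rumble.
siren_score = goertzel_power(siren_frame, fs, 700)
rumble_score = goertzel_power(rumble_frame, fs, 700)
```

In practice, many such features are fed to a trained classifier rather than compared against a hand-set threshold.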

“We use machine learning,” Hollosi explained. “And to train the algorithms, we use a whole range of archived noises.” Fraunhofer and partners from industry have already created initial prototypes, which should be reaching market maturity by the middle of the coming decade.

The acoustic sensor system comprises microphones, a control unit and software. The microphones, installed in a protective casing, are mounted on the outside of the vehicle, where they capture airborne noise. Sensors transmit these audio data to a special control unit that then converts them into the relevant metadata. In many other areas of use — such as security applications, the care industry and consumer products — the raw audio data are directly converted to metadata by smart sensors.
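The shape of that pipeline — raw audio stays in the control unit, and only compact metadata leaves it — can be sketched as follows. The event fields and JSON format here are assumptions for illustration, not the actual interface of the Fraunhofer system.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AcousticEvent:
    """Metadata describing one recognised sound event."""
    label: str          # e.g. "siren"
    confidence: float   # classifier score in [0, 1]
    timestamp_s: float  # when the event was detected

def control_unit_output(event: AcousticEvent) -> str:
    """The control unit forwards only this compact description of a
    recognised event; the raw audio stream never leaves the unit."""
    return json.dumps(asdict(event))

msg = control_unit_output(AcousticEvent("siren", 0.97, 12.4))
```

Keeping the raw audio local and transmitting only metadata is also what makes the approach attractive for privacy-sensitive settings such as security and care applications.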

Modified versions of this computer-based process for identifying acoustic events can be used in other sectors and markets. Such applications include quality control in industrial manufacturing. In this case, smart battery-powered acoustic sensors are used to process audio signals from plant and machinery. This information is sent wirelessly to a processor. On this basis, it is possible to determine the condition of the production plant and pre-empt any imminent damage. Other applications include automatic voice-recognition systems to enable hands-free documentation by technicians conducting, for example, turbine maintenance.
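A toy version of such condition monitoring — invented for illustration, not the actual product logic — is to compare the level of each captured machine-noise frame against a healthy baseline and flag large deviations:

```python
import math

def rms(frame):
    """Root-mean-square level of an audio frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def is_anomalous(frame, baseline_rms, tolerance=0.5):
    """Flag a frame whose level deviates from the healthy baseline
    by more than `tolerance` (as a fraction of the baseline)."""
    return abs(rms(frame) - baseline_rms) / baseline_rms > tolerance

healthy = [math.sin(0.2 * i) for i in range(400)]
damaged = [2.5 * math.sin(0.2 * i) for i in range(400)]  # e.g. a worn bearing, much louder

baseline = rms(healthy)
alarm = is_anomalous(damaged, baseline)  # → True
```

Real systems track spectral features over time rather than a single level, but the principle — learn what "healthy" sounds like, then report departures from it — is the same.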

Top image credit: ©stock.adobe.com/au/sittinan



