[Image: The metamaterial disc, and one of 36 passages down which sound travels. From ref. 1.]
Artificial-intelligence researchers have long struggled to make computers perform a task that is simple for humans: picking out one person’s speech when multiple people nearby are talking simultaneously.
The human ear cannot tell how the different passages alter the sound, says lead author Yangbo Xie, also at Duke. But the team wrote an algorithm that, by analysing the distortion imprinted on each sound, can almost always tell which direction it came from.
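The core idea, stripped to its essentials, is that each passage acts as a distinct spectral filter, so the recorded spectrum can be matched against the known filter responses to recover the direction. The following is a minimal toy sketch of that matching step, not the team's actual algorithm: the random "fingerprints", the number of frequency bins, and the cosine-similarity classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PASSAGES = 36   # the disc's 36 passages, one per direction
N_BINS = 64       # frequency bins in our toy spectra (assumed)

# Hypothetical per-passage transfer functions: each passage imprints a
# distinct spectral "fingerprint" on sound entering from its direction.
fingerprints = rng.uniform(0.1, 1.0, size=(N_PASSAGES, N_BINS))

def classify_direction(spectrum):
    """Return the passage index whose fingerprint is most similar
    (by cosine similarity) to the recorded spectrum."""
    unit = spectrum / np.linalg.norm(spectrum)
    scores = fingerprints @ unit
    scores /= np.linalg.norm(fingerprints, axis=1)
    return int(np.argmax(scores))

# Simulate: a flat-spectrum source enters through passage 17, is filtered
# by that passage's fingerprint, and picks up a little measurement noise.
true_direction = 17
source = np.ones(N_BINS)
recorded = fingerprints[true_direction] * source + 0.01 * rng.normal(size=N_BINS)

print(classify_direction(recorded))  # recovers 17 in this toy run
```

A real system would use measured transfer functions and broadband sources rather than random fingerprints, but the matching principle, comparing the received spectrum against a dictionary of known passage responses, is the same.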
The device is an "acoustic metamaterial": a structure patterned with smaller features and designed to affect the acoustic waves that pass through it. Bruce Drinkwater, a mechanical engineer at the University of Bristol, UK, calls the idea "a really nice one". He says that the device's bulk could be a limitation to its practical use, and that this version works only at relatively high frequencies. However, he adds that "there could be plenty of room to optimize the design for size in the future."