By Annesya Banerjee, Unusual Solutions Finalist
In the modern high-tech world, Unmanned Aerial Vehicles, or drones, are a fascinating topic. By varying the embedded sensors, drones can be deployed across sectors and scenarios, from aerial videography and entertainment to surveillance and delivery. Following a disaster, drone-mounted cameras gather high-quality images to search for victims. But studies show that in low light, near darkness, or under occlusion, these cameras falter, limiting their effectiveness in search and rescue operations. In such cases, audio signals become crucial where visual sensors fail: rescue teams often rely on sound to track victims who are out of sight. So wouldn't it be fantastic if drones could rely on aural perception, as humans do?
Just imagine a drone with an embedded mechanism similar to the human ear, and whenever you call it, the drone could determine where you are and then appear in front of you. If that intelligence can be embedded within the drone, it’s possible to design more effective search operations.
As the saying goes, Rome wasn't built in a day; this concept likewise took time to develop and reach the implementation stage. It originated in an incident almost two years ago. A friend camping in a forest was separated from his group. Without a cell signal, he was unable to phone for help. As night set in, the team had no option but to call his name aloud and hope for a response. Even though he answered several times, the ground the searchers could cover on foot was limited, and my friend was not rescued until the following morning. More manpower to track his voice might have sped up the rescue, but upon further research and reflection, I arrived at a more compact and implementable concept.
With these critical features in mind, I started exploring the related domains. Research shows that drones offer high efficiency in search and rescue operations. When searching after a disaster, calls for help from those in distress are a critical clue for our proposed technology. The system needs to be reliable, effective, easy to use, and affordable so that local experts and first response teams can benefit from its use.
The drone would utilize a fascinating audio technology called Sound Source Localization to identify and track people by sound. The algorithm uses the audio signals acquired by an array of microphones to estimate the time delays of arrival between microphone pairs, from which geometric methods calculate the sound source's coordinates in the azimuth-elevation plane. When the sound source localization system is embedded in the drone, the estimated coordinates are not absolute positions; they are relative to the drone's own position. Thus, the proposed method also keeps track of the drone's current coordinates using GPS tracking. The processor also needs to eliminate irrelevant noise, such as wind noise and the drone's own mechanical noise (technically known as ego noise), which the sensors inevitably capture. This noise elimination is necessary to recover the original sound emitted by the person in distress. Our system incorporates a fast, efficient, optimization-based processing approach for the complete analysis, and ultimately outputs the sound source coordinates, which correspond to the position of the victim shouting for help. The audio acquisition hardware is the Matrix Creator microphone array.
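To make the time-delay-of-arrival idea concrete, here is a minimal sketch of the standard approach for a single microphone pair: a GCC-PHAT cross-correlation estimates the inter-microphone delay, and a far-field geometric formula converts that delay into an angle of arrival. This is an illustrative simplification, not our full multi-microphone pipeline; the function names, sampling rate, and microphone spacing are assumptions for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay (seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-15        # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Re-center the correlation so index 0 sits at lag -max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def azimuth_from_pair(sig, ref, fs, mic_distance):
    """Far-field angle of arrival (degrees) for one two-microphone pair."""
    max_tau = mic_distance / SPEED_OF_SOUND  # physically possible delay bound
    tau = gcc_phat(sig, ref, fs, max_tau=max_tau)
    # Clip to the arcsin domain; noise can push the ratio slightly past +/-1
    ratio = np.clip(tau * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```

With several such pairs arranged in a circular array like the Matrix Creator's, the per-pair angle estimates can be combined to resolve a full azimuth-elevation direction rather than a single ambiguous angle.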
This is our developed system so far:
Apart from the audio processing, the technology requires expertise in communications to ensure a stable, high-speed data link between the sensor array and the ground station in disaster-affected or remote areas such as forests and mountains. For relatively short range, the WiFi communication system was built using a Sixfab 4G/LTE module with a Raspberry Pi. In an actual deployment, the drone, with all modules embedded, would fly over the search area, continuously gathering data for analysis and sending the estimated source location to the rescue team at the ground station.
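The telemetry side of that link can be sketched very simply: bundle the drone's GPS fix together with the relative source bearing and push it to the ground station as a small JSON packet over UDP. This is an illustrative sketch only; the packet field names, port number, and coordinates are hypothetical, not a fixed protocol from our system.

```python
import json
import socket

# Hypothetical ground-station address; in the field this would be reached
# over the Raspberry Pi's WiFi or 4G/LTE hop.
GROUND_STATION = ("127.0.0.1", 9000)

def make_packet(drone_lat, drone_lon, drone_alt_m, azimuth_deg, elevation_deg):
    """Bundle the drone's GPS fix with the source bearing relative to it.

    Field names are illustrative, not a fixed wire format.
    """
    return {
        "drone": {"lat": drone_lat, "lon": drone_lon, "alt_m": drone_alt_m},
        "source_bearing": {"azimuth_deg": azimuth_deg,
                           "elevation_deg": elevation_deg},
    }

def send_estimate(packet, addr=GROUND_STATION):
    """Send one JSON telemetry packet over UDP (fire-and-forget)."""
    payload = json.dumps(packet).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, addr)
```

UDP suits this role because a lost packet is quickly superseded by the next estimate, so the lower latency matters more than guaranteed delivery.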
Here are some results from our analysis: