Just one sensor can create 3D images of rooms via echolocation

May 8, 2021

The researchers were inspired by echolocation techniques already found in nature. (Unsplash/Courtnie Tosana)

Researchers at the University of Glasgow have developed a system that, after being trained, can create a three-dimensional image of a room, along with the moving objects or people within it, with nothing but a single sensor that sends and receives acoustic and radio waves as they bounce around the space.

The system, detailed in an April 30 Physical Review Letters article, lets users track the location and position of a person without revealing his or her identity, preserving a degree of privacy that a camera cannot offer. It shows promise in health care and home settings, such as monitoring sick patients in hospitals or at home, and could also serve as a tool for tracking foot traffic in stores and other enclosed gathering places.

"We've demonstrated that by combining very simple hardware — we've done this with just loudspeakers and a microphone and a laptop — with an intelligent technique, plus a powerful algorithm, you can sort of compensate for the lack of the spatial resolution of the sensor," Alex Turpin, a research fellow at the University of Glasgow and a co-first author on the paper, told The Academic Times. "When we use this sensor, we are emitting many waves in many different directions. And our parameter is just timing. How much time does it take for any of these waves to reach the sensor?"

The system, which Turpin developed with research partner and co-lead author Valentin Kapitany as well as other collaborators, has a distinctive property: Instead of simply recording a single, back-and-forth reflection, as most ranging devices do, it tracks electromagnetic or acoustic waves as they reflect off multiple surfaces, translating those multipath waveforms into a three-dimensional view of the environment.

The system sends out an array of radio waves or acoustic pulses that, in the authors' words, "flash-illuminate" the scene. In other words, they fill the room and are reflected by its surfaces, including any people present. The waves bounce from one object to another until they return to a sensor that acts as a stopwatch, recording the precise moment of each arrival. These arrival times are collected into a histogram that a neural network, trained on the specific room, can use to form an image of the environment as well as any objects within it.
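
The paper's exact processing pipeline is not reproduced here, but the first step it describes, binning echo arrival times into a time-of-flight histogram, can be sketched in a few lines of Python. The sampling rate, bin count, maximum range and the `echo_histogram` helper below are illustrative assumptions, not values from the study.

```python
import numpy as np

def echo_histogram(recording, fs=48_000, n_bins=200, max_path=20.0, c=343.0):
    """Bin the energy of a recorded echo trace by round-trip travel time.

    recording : 1D array of sensor samples, starting at pulse emission.
    fs        : sampling rate in Hz (48 kHz is an assumed, typical value).
    n_bins    : number of time-of-flight bins fed to the network.
    max_path  : longest round-trip path length to consider, in meters.
    c         : wave speed; 343 m/s for sound in air (use 3e8 for radio).
    """
    max_delay = max_path / c                    # longest round trip we keep
    n_samples = int(max_delay * fs)
    trace = np.abs(recording[:n_samples]) ** 2  # echo energy per sample
    # Sum energy into coarse time-of-flight bins: this 1D histogram is the
    # only spatial information the single sensor provides.
    edges = np.linspace(0, n_samples, n_bins + 1, dtype=int)
    return np.add.reduceat(trace, edges[:-1])

# Example: a synthetic trace with two echoes at different delays.
t = np.zeros(4800)
t[500] = 1.0   # near surface
t[2100] = 0.4  # farther surface, weaker return
hist = echo_histogram(t)
print(hist.shape, hist.argmax())
```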

Although an initial training phase requires a 3D camera to supervise the deep-learning algorithm, the camera can eventually be removed so that the algorithm can accurately depict an entire 3D space on its own, relying only on the single sensing channel, whether that is a radio antenna or a speaker-and-microphone pair. The system incorporates relatively accessible hardware: a PC speaker to generate acoustic waves and a microphone to record them, or a radio-frequency emitter and transceiver for radio waves.
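
The article does not specify the network architecture, so the following PyTorch sketch only illustrates the supervised setup described above: a small, hypothetical fully connected network that maps the 1D echo histogram to a coarse depth image, trained against frames from the 3D camera until the camera can be set aside. The layer sizes, the 32x32 output resolution and the stand-in training batch are all assumptions.

```python
import torch
from torch import nn

# Hypothetical shapes: 200-bin echo histogram in, 32x32 depth map out.
N_BINS, H, W = 200, 32, 32

class EchoToDepth(nn.Module):
    """Illustrative network mapping a time-of-flight histogram to a depth image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BINS, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, H * W),
        )

    def forward(self, hist):
        return self.net(hist).view(-1, H, W)

model = EchoToDepth()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training: each sample pairs an echo histogram with a depth frame captured
# at the same instant by the 3D camera. Once trained, the camera is removed
# and the model reconstructs depth from the histogram alone.
for hist, depth in [(torch.rand(8, N_BINS), torch.rand(8, H, W))]:  # stand-in batch
    opt.zero_grad()
    loss = loss_fn(model(hist), depth)
    loss.backward()
    opt.step()
```

In deployment, only the forward pass would run, turning each new histogram into a depth estimate without the camera.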

The researchers were inspired by the kinds of echolocation techniques that can already be found in nature, such as the sonic pulses that bats and dolphins send out to navigate their surroundings. Those animals are able to form a mental picture of their environment by detecting the length of time it takes echoes to bounce off various surfaces. And although the skill rarely comes naturally to humans, some people have learned to echolocate by clicking their mouths or striking a cane and listening for subsequent echoes. The ability can be especially acute in people who are blind, involving the same brain regions that would typically be responsible for visual processing.

Although most humans, even those incapable of echolocation, are excellent at evaluating and processing relevant sounds and simultaneously filtering out unnecessary ones, this tendency also limits our ability to detect more complicated sound signatures. "What you don't hear is the echo from your voice [when it bounces from] the first wall in your room, the second wall in your room, the ceiling — or else you would go totally crazy, right? We would be hearing this reverberation. It would be a disaster," Turpin explained. Meanwhile, "Our sensors don't have a brain themselves. They can be open for as much time as we want, and they can record as much time as we want."

Humans have developed many other types of devices for locating objects — including radar systems that send radio pulses into the atmosphere and sonar systems that send sound waves into the depths of the ocean. One modern system, echocardiography, even allows researchers to create a 3D visualization of a person's heart for non-invasive monitoring. But for engineers, the idea of reconstructing a 3D environment from scratch with oscillating waves alone, especially when that environment is inhabited by a group of people, remains an intimidating design challenge.

Although the system marks a step forward by creating 3D images out of relatively accessible components, there are several challenges that the researchers still hope to overcome. For one, the current setup involves a training phase during which users must teach the neural network how to locate objects within the confines of a single room. The team aims to eventually develop a more universal system so that any room could be easily outfitted with the sensor. Additionally, the researchers would like to explore the limits of the system's capabilities. "How many people can we identify in a room?" Turpin asked. "We can distinguish between two people, but can we do 10?"

Despite its limitations, the system offers improvements over traditional camera setups, especially by ensuring anonymity in private settings. For instance, doctors and nurses could use the sensor to confirm that a patient is positioned correctly on a hospital bed, or even to monitor his or her breathing rate. Similarly, caretakers could install the sensors in the homes of people at risk of medical emergencies without needing to repeatedly check on them in person.

"There are so many places where we use cameras now. And we don't have to. There are places where we would like to use a camera and we can't because of privacy," Turpin explained. "This technology could [take] that spot."

The study "3D imaging from multipath temporal echoes," published April 30 in Physical Review Letters, was authored by Alex Turpin, Valentin Kapitany, Jack Radford, Davide Rovelli, Kevin Mitchell, Ashley Lyons, Ilya Starshynov and Daniele Faccio, University of Glasgow.
