Using machine learning to improve the reliability of wireless communication systems

A collaboration involving the Turing has developed a new method for improving wireless technologies such as Wi-Fi

Wednesday 15 Sep 2021

The COVID-19 pandemic has made us all increasingly aware of our reliance on wireless communications via our phones, laptops and other devices. Thanks to our existing infrastructure, students are able to attend school or university online, a major section of the workforce can do its job remotely, and we can connect regularly with our loved ones. That being said, the authors of this blog won’t have been the only ones to have experienced an unstable Wi-Fi connection in the middle of a call! This is not surprising – our wireless systems can often be unreliable.

Why is that the case? Well, wireless systems make use of radio waves (a type of electromagnetic radiation) to carry information between devices, and the way these signals are transmitted can differ widely between settings. This creates challenges for those who are designing these systems.

Take, for example, the Wi-Fi signal in your home: the journey that the signal takes will vary greatly, depending on the location of your router and your device. To reach your device, the signal must pass through whatever is blocking its path, and it will also bounce off the surfaces it encounters – all of which affects the signal that arrives. Of course, Wi-Fi is only one of many ways in which we use wireless signals to communicate. Other examples include our car's satnav picking up signals from GPS satellites, talking on the phone in a moving train, or even NASA sending messages from Earth to Mars. In short, there's a huge variation in the physical environments and obstacles encountered by these signals.

Clearly, accounting for these differences is essential to ensuring we have reliable communication systems. To do so, scientists and engineers are developing ever more complex mathematical models of how wireless signals interact with their environment. These models usually apply to a wide range of distances and environments, but need to be calibrated with data when adapted to individual scenarios. The process of calibrating a model with data can be thought of as turning the knobs on an old radio until you receive the best possible signal. In this analogy, the old radio is our model, and listening to the radio corresponds to collecting data. Our strategy for turning the knobs is the calibration method, and the number of knobs corresponds to the number of parameters in the model.
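
To make the knob-turning analogy concrete, here is a minimal sketch in Python of calibrating a toy one-parameter model: a simple log-distance path-loss model whose single "knob" is the path-loss exponent. The model, the synthetic data and the grid-search fitting strategy are all illustrative stand-ins, not the models or methods from our work.

```python
import numpy as np

# Toy one-parameter "model": received power (in dB) falls off with the log
# of distance. The path-loss exponent `n` is the single "knob" to tune.
def path_loss_db(distance_m, n):
    return -10.0 * n * np.log10(distance_m)

# Synthetic "measurements": data generated with a true exponent of 2.5 plus
# noise, standing in for signals recorded in the real world.
rng = np.random.default_rng(0)
distances = np.linspace(1.0, 50.0, 100)
measured = path_loss_db(distances, n=2.5) + rng.normal(0.0, 2.0, size=100)

# Calibration: turn the knob (try candidate exponents) and keep the value
# whose predictions best match what was actually observed.
candidates = np.linspace(1.0, 4.0, 301)
errors = [np.mean((measured - path_loss_db(distances, n)) ** 2) for n in candidates]
best_n = candidates[np.argmin(errors)]
print(f"Calibrated path-loss exponent: {best_n:.2f}")  # close to the true 2.5
```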

Calibration has been a significant technical challenge to date because existing methods usually depend on the specific model being calibrated. This means that a new calibration method needs to be developed every time a new model is proposed, which significantly slows down the innovation process. Fortunately, this is where statistics and machine learning can come in handy. In recent work, we proposed an algorithm based on approximate Bayesian computation (ABC) that is able to calibrate models with vastly different mathematical structures. We did this by combining engineering expertise with machine learning tools to create a method that automatically and reliably detects subtle differences between simulated and measured wireless signals, rather than relying directly on the mathematical structure of the model being calibrated.
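
For readers who want a feel for how ABC works, here is a minimal rejection-sampling sketch in Python. Everything in it is illustrative: the simulator, the prior, the hand-picked summary statistics and the tolerance are toy stand-ins for the automatically learned signal comparisons described above, not the method from our paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in simulator: any black box mapping a parameter theta to synthetic
# data. ABC never looks inside it, which is why the same calibration recipe
# works for models with very different mathematical structures.
def simulate(theta, size=200):
    return rng.exponential(scale=theta, size=size)

# Summary statistics: compress a signal into a few numbers. A hand-picked
# mean/std pair is used here purely for illustration.
def summarise(x):
    return np.array([x.mean(), x.std()])

observed = simulate(theta=3.0)   # pretend these are real measurements
s_obs = summarise(observed)

# ABC rejection sampling: propose parameters from the prior, simulate data,
# and keep only proposals whose summaries land close to the observed ones.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.1, 10.0)            # prior over the unknown parameter
    s_sim = summarise(simulate(theta))
    if np.linalg.norm(s_sim - s_obs) < 0.3:   # tolerance (epsilon)
        accepted.append(theta)

if accepted:
    print(f"Posterior mean estimate: {np.mean(accepted):.2f} (true value 3.0)")
```

The accepted parameter values approximate the posterior distribution over the model's "knobs", without ever needing an explicit formula for how likely the data are under the model.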

As a result, our new method has the potential to significantly speed up the innovation process for wireless communication systems. We have decoupled the model from its calibration process, thus alleviating the need to develop new calibration methods for each new model. Engineers will therefore be able to more quickly develop accurate models of how signals are transmitted, and hence improve our communication systems.


About the project
This project is part of an ongoing collaboration between researchers at The Alan Turing Institute's data-centric engineering programme, University College London, Aalborg University and the Finnish Centre for Artificial Intelligence (FCAI). It builds on the work of the Turing's fundamentals of statistical machine learning project.

The project was initially planned to take place during a six-month research visit by Ayush Bharti to the Turing in early 2020. Unfortunately, this didn't work out due to the pandemic, but thankfully everyone's Wi-Fi was good enough to withstand a long-term virtual collaboration. For the technically minded reader, all details can be found in our paper, recently published in IEEE Transactions on Antennas and Propagation.

Image: ShutterOK / Shutterstock