Machine learning for radio frequency applications

How do we generate robust, real-time machine learning algorithms with integrated hardware and software for radio frequency applications?


For many years machine learning approaches have been successfully applied to numerous detection and classification tasks, from image processing to voice separation and text recognition. However, it is only recently that similar techniques have been applied to the processing of radio frequency (RF) signals and the electromagnetic environment (EME). The EME is becoming more congested, contested and complex, as evidenced by:

  • Machine learning enabled RF smart sensor systems being deployed in domains as diverse as medical diagnostics, driverless vehicles, satellites, defence and agriculture.
  • An increase in the deployment of Internet of Things (IoT) devices.
  • An increase in more complex communications, in the form of MIMO in the 5G rollout and the development of Wi-Fi 6.
  • The number of commercial and civilian satellite launches with synthetic aperture radar (SAR) capability set to pass 60 this year. (Rosen, J., 2021).

It is therefore becoming increasingly difficult for a human in the loop to handle the flood of information. This is resulting in the adoption of deep learning approaches for the detection, classification, identification and transmission of signals.

The development of new technologies for the automated, real-time processing and analysis of radio frequency data requires domain-specific expertise that is spread across multiple organisations and disciplines. This special interest group aims to build a community of machine learning (ML) for RF researchers and to run a series of theme-led workshops covering the applications and challenges in this domain.

Explaining the science

Machine learning (ML) for RF degradation and resilience

Many of the characteristics of RF signals that are exploited to enable long-range imaging, transmission and communication without direct line of sight create a new set of challenges and opportunities for ML algorithms intended to learn and monitor activity. Examples include RF propagation effects such as multipath in urban environments and diffraction caused by high water vapour content in the atmosphere. Similarly, the development of covert capabilities such as passive radar and low probability of intercept waveforms means ML algorithms need to be resilient to wide dynamic ranges, interference and low signal-to-noise ratios. Both military and commercial radars are exhibiting ever-increasing levels of agility across multiple parameters and over short timescales. Such signals challenge electronic surveillance receivers attempting to detect, cluster, separate and identify radars in a contested and congested EME. Processing techniques relying on a priori knowledge of expected signals in the environment will be limited in their performance, which creates an opportunity for the application of novel ML approaches to these processes.
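As an illustration of the degradation effects described above, the sketch below (assuming NumPy; the delays, gains and SNR values are purely illustrative) injects a simple multipath channel and additive noise into a clean signal — the kind of augmentation used when training ML models that must stay robust at low signal-to-noise ratios:

```python
import numpy as np

def apply_channel(signal, delays, gains, snr_db, rng):
    """Apply a toy multipath channel (sum of delayed, scaled copies of the
    signal) followed by additive white Gaussian noise at a target SNR."""
    out = np.zeros_like(signal, dtype=complex)
    for d, g in zip(delays, gains):
        out[d:] += g * signal[:len(signal) - d]
    # Scale the noise so the result has the requested signal-to-noise ratio.
    sig_power = np.mean(np.abs(out) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(size=out.shape) + 1j * rng.normal(size=out.shape)
    noise *= np.sqrt(noise_power / 2)
    return out + noise

rng = np.random.default_rng(0)
t = np.arange(1024)
clean = np.exp(2j * np.pi * 0.05 * t)            # a simple complex tone
noisy = apply_channel(clean, delays=[0, 7, 23], gains=[1.0, 0.4, 0.2],
                      snr_db=5.0, rng=rng)
```

A training pipeline would draw the delays, gains and SNR at random per example so that the model sees a wide range of channel conditions.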

The modern agility of radars presents a challenge for detection but also an opportunity to apply novel approaches to spectrum sharing and to waveform distribution and design. Traditionally, the spectrum was managed by operating communications systems within fixed bandwidths. ML approaches, e.g. in cognitive radios and radars, are now being used to adaptively change transmission parameters to improve spectrum utilisation, optimise channel conditions and enable adaptive routing between multiple nodes and networks (Deepwave, 2021). To meet the demand for automatic network recognition and to build resilience in hostile environments, we need to be able to detect and classify overlapping RF signals from multiple sources operating over ever-increasing frequency bandwidths.
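One simple way to picture the adaptive behaviour described above is an epsilon-greedy bandit that learns which candidate channel gives the best throughput (pure Python; the channel qualities and parameters are hypothetical, not taken from any particular cognitive radio system):

```python
import random

def select_channel(q_values, epsilon, rng):
    """Epsilon-greedy choice over candidate channels: explore a random
    channel with probability epsilon, otherwise exploit the channel with
    the best observed throughput estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def update(q_values, counts, channel, reward):
    """Running-mean update of the throughput estimate for one channel."""
    counts[channel] += 1
    q_values[channel] += (reward - q_values[channel]) / counts[channel]

rng = random.Random(42)
true_quality = [0.2, 0.8, 0.5]      # hypothetical per-channel success rates
q, n = [0.0] * 3, [0] * 3
for _ in range(2000):
    ch = select_channel(q, epsilon=0.1, rng=rng)
    reward = 1.0 if rng.random() < true_quality[ch] else 0.0
    update(q, n, ch, reward)
```

After a few thousand transmissions the estimates converge on the true channel qualities and the radio mostly transmits on the best channel, while the exploration rate keeps it responsive if conditions change.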

Machine learning techniques are increasingly being explored for protection against jamming and deception, as they provide the means to detect anomalies and unusual patterns. Countering jamming requires the ability to detect the jamming signal and automatically adapt to it, for example by adapting the waveform or moving to another part of the EME. ML for jamming and deception detection requires an understanding and improved awareness of the operational EME. Spoof detection requires algorithms capable of identifying and distinguishing features often based on higher-order statistics, and thus lends itself to ML. As radar systems gradually move towards using ML techniques themselves, waveform structure, timing and agility may all be used to concurrently optimise the probability of detection while avoiding interception by an adversary. As such, ML may be the only feasible approach to exploiting such signals.
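As a toy illustration of anomaly-based jamming detection, the sketch below (standard library only; the frame size, threshold and jammer model are illustrative assumptions) fits a baseline of frame energies on benign data and flags frames that deviate strongly, as a powerful jammer would:

```python
import math
import random

def fit_baseline(frames):
    """Learn the mean and standard deviation of frame energy from benign data."""
    energies = [sum(x * x for x in f) / len(f) for f in frames]
    mu = sum(energies) / len(energies)
    var = sum((e - mu) ** 2 for e in energies) / len(energies)
    return mu, math.sqrt(var)

def is_anomalous(frame, mu, sigma, k=4.0):
    """Flag a frame whose energy lies more than k sigmas from the baseline."""
    e = sum(x * x for x in frame) / len(frame)
    return abs(e - mu) > k * sigma

rng = random.Random(1)
benign = [[rng.gauss(0, 1) for _ in range(256)] for _ in range(200)]
mu, sigma = fit_baseline(benign)
jammed = [x + 3.0 for x in benign[0]]    # strong offset mimicking a jammer
```

Real systems would use richer spectral features and learned models rather than a single energy statistic, but the principle — characterise the benign EME, then flag departures from it — is the same.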

Machine learning resilience in contested environments necessitates strong verification and validation of algorithms, which requires drawing on a large community of experts. Developing universal test datasets is crucial for benchmarking algorithms and explaining methodologies. It is critical for user confidence and wider adoption that we move away from using ML algorithms as black boxes, explore new methods for explaining network performance, and start to encode uncertainties in our decision making and predictions. In training ML algorithms, the importance of pre-processing and the choice of features and embeddings can be overlooked in favour of the choice of ML architecture and hyperparameter fine-tuning. When testing algorithms, it is important to identify which parts of a new algorithm contribute to better performance, and to have a universal set of metrics for testing.

To tackle the scarcity of labelled real datasets, synthetic data is in many cases being used for augmentation. This is of particular relevance in defence, where complete databases of signals may not be available. The development, validation and verification of sufficiently large, variable and realistic datasets consisting of both real and synthetic data is therefore of particular interest. Generating realistic RF datasets that incorporate the interactions between multiple sensors and account for interference is a major challenge. To auto-generate datasets that are representative of different types of real data, we also need automatic methods for feature extraction that reflect aspects such as characteristic parameter ranges and skews of distributions. We then need to find ways to map these features onto RF functional IDs and to understand how features can be used to identify and explain the phenomena causing signal interactions with the environment.
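The idea of generating labelled synthetic data from characteristic parameter ranges can be sketched as follows (pure Python; the emitter classes and their PRI and pulse-width ranges are entirely hypothetical):

```python
import random

# Hypothetical emitter classes defined by characteristic parameter ranges.
EMITTER_SPECS = {
    "navigation":   {"pri_us": (800, 1200),  "pulse_width_us": (0.5, 2.0)},
    "surveillance": {"pri_us": (2000, 4000), "pulse_width_us": (5.0, 20.0)},
}

def synth_pulse_train(label, n_pulses, rng):
    """Draw a pulse train whose PRI and pulse width are sampled from the
    labelled emitter's characteristic ranges, with small PRI jitter."""
    spec = EMITTER_SPECS[label]
    pri = rng.uniform(*spec["pri_us"])
    width = rng.uniform(*spec["pulse_width_us"])
    toas, t = [], 0.0
    for _ in range(n_pulses):
        toas.append(t)
        t += pri * rng.uniform(0.99, 1.01)   # 1% pulse-to-pulse jitter
    return {"label": label, "toas_us": toas, "pulse_width_us": width}

rng = random.Random(7)
dataset = [synth_pulse_train(lbl, 32, rng)
           for lbl in EMITTER_SPECS for _ in range(100)]
```

Realistic generators would add many more effects — frequency agility, missed and spurious pulses, overlapping emitters and propagation — but the structure of sampling from per-class parameter distributions carries over.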

Multi-source signal fusion and distribution

Multi-sensor distributed systems measure parameters independently and then use signal processing techniques to combine observations. Distributing signals across multiple sensors can make operations more covert, increase platform agility, allow rapid switching between modalities and help to resolve trade-offs between platform performance and Size, Weight and Power (SWAP). Integrating observations from multiple sensors can improve accuracy, reliability and detectability, reduce ambiguity, increase spatial-temporal coverage, enhance resolution, increase the dimensionality of target observations, help to resolve multipath and improve signal-to-noise ratio (Kong et al., 2020).
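The simplest form of the combination step described above is inverse-variance weighting — the minimum-variance way to fuse independent Gaussian measurements of the same quantity (a standard textbook formula, shown here for illustration rather than a method from the cited work):

```python
def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.
    Returns the fused estimate and its (reduced) variance; sensors with
    lower variance contribute proportionally more."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

For example, fusing two equally reliable range estimates of 10.0 and 12.0 (variance 1.0 each) gives 11.0 with variance 0.5 — the fused result is always at least as certain as the best individual sensor.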

Enhanced integration of multi-platform systems operating in an agile and real-time way requires novel multi-source signal fusion and distribution techniques. ML techniques are being explored for the rapid, efficient and automatic allocation, transmission and reception of signals across multiple platforms. Distributed systems need very accurate position and timing information. Whilst GPS and atomic clocks can help to maintain good coherence, signal processing is still used post-acquisition to make a number of corrections. Such operations may begin to be replaced by ML algorithms that improve coherence and perform timing and positioning corrections and adjustments in real time. How we acquire and integrate data from multi-user distributed sensors, and use them to cross-validate each other, has many solutions in the realm of embedded hardware and software.

RF embedded hardware and software

A drive towards real-time distributed processing at the edge, with a reduced human in the loop, is pushing solutions towards embedded hardware and software approaches. Hybrid computing architectures and software-defined radios for ML applications are rapidly advancing areas of technology, from embedded control to autonomy and Artificial Intelligence (AI). The strong coupling between hardware and software in the RF domain and the use of purpose-built deep learning accelerators will need to be exploited to meet future requirements for data retrieval and transmission, as well as considerations of SWAP. For signal detection it will be desirable to make the power investment proportional to the level of interest in a particular signal, determining whether a signal is interesting as early as possible (Mullins, R., 2021). Multi-purpose RF sensors with ML capability using embedded hardware and software will be used to detect RF signals including Wi-Fi, Bluetooth and cellular, exploiting the order-of-magnitude speed-up compared to conventional techniques.

In the case of multiple sensors, we will be looking to control and adapt the power consumption, parameters and precision of each sensor to optimise our use of the available power. There will be opportunities to co-design sensors, pre-processing and neural networks (Mullins, R., 2021). We are beginning to see frameworks designed to generate efficient neural network accelerators that automatically map machine learning architectures onto FPGAs (Mullins, R., 2020). This has multiple applications, notably for improved situational awareness. Strategies for early exit from inference at different stages of a network architecture are beginning to be explored (Laskaridis, S. et al., 2020). In-network computing is being used to offload standard applications to network devices, increasing throughput by processing data as it traverses the network (Zilberman, N., 2020). In-network data processing on wireless sensor nodes can be used to collect data at multiple distributed sources and aggregate it on the way to its final destination (Leung, K., 2020). There is great potential for the use of ML for data aggregation and for resource optimisation and allocation. Dynamic hardware adaptation is already enabling in-orbit satellite updates and partial reconfigurations. Autonomous, unmanned vehicles will require automatic algorithm updates to embedded hardware to meet changes in the environment, cross-platform modifications and advances in technology, often on legacy hardware.
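The early-exit idea mentioned above can be sketched as a cascade of models in which cheap stages answer easy inputs and expensive stages run only when confidence is low (pure Python; the stage models here are trivial stand-ins for illustration, not the HAPI method itself):

```python
def early_exit_infer(x, stages, threshold=0.9):
    """Run a cascade of models; stop at the first stage whose confidence
    exceeds the threshold, saving the cost of later (larger) stages.
    Returns the predicted label and the index of the stage that answered."""
    for i, stage in enumerate(stages):
        label, confidence = stage(x)
        if confidence >= threshold or i == len(stages) - 1:
            return label, i

# Hypothetical stand-in stages: a cheap detector that is only confident on
# strong inputs, and a full model that always produces a confident answer.
cheap = lambda x: ("signal", 0.95) if abs(x) > 1.0 else ("unknown", 0.4)
full = lambda x: ("signal" if x > 0 else "noise", 0.99)
```

Strong inputs exit at the cheap first stage, so average inference cost — and hence power — scales with how hard the inputs are rather than with the size of the largest model.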

Developing efficient ML solutions on smaller platforms requires model reduction, dynamic compression, compact representations and knowledge distillation, using techniques such as network pruning, improving performance in lower-precision modes, dimensionality reduction, and sparse layer representations. We need a good understanding of when commercial off-the-shelf (COTS) solutions are fit for purpose and when we require custom specialised hardware. There are a number of choices to be made about what processing should be done in hardware and what in software, where to perform computations at the edge and when to push back to the cloud. The answers to these questions are in many cases strongly linked to requirements for data security and anonymisation.
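Magnitude-based pruning, one of the compression techniques listed above, can be sketched in a few lines (pure Python on nested lists for clarity; a real implementation would operate on framework tensors and typically fine-tune the network after pruning):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction of weights with the smallest absolute values.
    This is the basic form of magnitude pruning used for model compression;
    the surviving large weights carry most of the layer's behaviour."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]
```

Combined with a sparse storage format, the zeroed weights need not be stored or multiplied at all, which is where the SWAP savings come from.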

Application specific ML for RF

ML for RF covers a wide range of scales in terms of distances, frequencies and applications. Small-scale passive systems are used for monitoring health, and in a COVID world and beyond, wireless IoT technologies dominate our day-to-day home lives. Signals intelligence, electronic warfare and communications increasingly need new approaches to automate the detection, classification and identification of signals, from urban-scale analytics to larger-scale signals intercept on airborne platforms for situational awareness. At an Earth observation scale, Interferometric Synthetic Aperture Radar (InSAR) is being used to automatically extract features from the difference in phase between satellite acquisitions. This is being used to detect earthquakes, monitor subsidence, and track ice flows to monitor the effects of climate change. These applications span scales from that of an atom to that of a football pitch. Our ability to successfully deploy ML algorithms across such a wide range of scales depends on our ability to adapt solutions to domain-specific applications.
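The interferometric phase that InSAR processing builds on is, at its core, the per-pixel phase difference between two co-registered complex acquisitions, as in this minimal sketch (standard library only; real pipelines add co-registration, flattening and phase unwrapping before any feature extraction):

```python
import cmath

def interferogram(pass1, pass2):
    """Per-pixel phase difference between two complex SAR acquisitions.
    Ground deformation between passes shows up as changes in this phase."""
    return [cmath.phase(a * b.conjugate()) for a, b in zip(pass1, pass2)]

# Two hypothetical one-pixel acquisitions differing by 0.3 rad of phase.
first_pass = [cmath.exp(0.5j)]
second_pass = [cmath.exp(0.2j)]
phase = interferogram(first_pass, second_pass)
```

ML methods for InSAR then operate on images of this phase (or its unwrapped form) to detect deformation signatures automatically.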


Deepwave Digital, 2021

Kong, L. et al., 2020, Int. J. Extrem. Manuf., 2, 022001

Laskaridis, S., Venieris, S. I., Kim, H., Lane, N., 2020. HAPI: Hardware-Aware Progressive Inference, arXiv:2008.03997, DOI: 10.1145/3400302.3415698

Leung, K. The Alan Turing Institute Edge Computing for Earth Observation Workshop Abstracts, 2020

Mullins, R. The Alan Turing Institute Edge Computing for Earth Observation Workshop Abstracts, 2020

Mullins, R. email correspondence 2021

Rosen, J. Fleets of radar satellites are measuring movements on Earth like never before, 25 Feb 2021

Zilberman, N. The Alan Turing Institute Edge Computing for Earth Observation Workshop Abstracts, 2020

Talking points

  • Understanding RF multipath in urban environments
  • Synthetic RF data generation for machine learning augmentation
  • RF machine learning approaches in low SWAP scenarios through the co-design of hardware and software
  • Machine learning approaches for the detection of low probability of intercept waveforms
  • Network analysis of ad-hoc RF communications networks
  • Machine learning resilience in contested environments
  • Efficiently combining data from multiple distributed RF sensors
  • Anomaly detection in cluttered electromagnetic environments
  • Validation and verification of RF machine learning approaches
  • Explainability of machine learning approaches
  • The application of machine learning approaches to the de-interleaving of pulses, specific emitter identification and geolocation

Recent updates

The most recent interest group meeting was on 10 January. The focus of this meeting was machine learning for communications applications. With increased reliance on Internet of Things (IoT) devices and more complex communications, in the form of MIMO in the 5G rollout and the development of Wi-Fi 6, machine learning (ML) approaches are being widely adopted in communications. In this meeting we heard about advances in this domain, including machine learning approaches for network management and operation, passive reconstruction of communications network topology, radio frequency fingerprinting, and ML solutions for developing a 6G network with low latency, high data rate and capacity, secure communications and reliable data connectivity.


Please contact Victoria Nockles if you have any questions about the format or email [email protected] if you wish to attend.


Contact info

[email protected]

External researchers

Richard Walters, Durham University
Matthew Ritchie, UCL 
Michael Woollard, UCL
Robert Mullins, University of Cambridge 
Kin Leung, Imperial College London 
Daniel Andre, Cranfield University
Michail Antoniou, University of Birmingham