OneDegree

Repurposing Wireless Networks for Imaging

Systems and Methods

We develop systems and methods that exploit the infrastructure and dynamic nature of wireless networks to achieve joint communication and high-resolution imaging. Specifically, we focus on three contexts: (a) reuse: no change to signaling or infrastructure; (b) renew: changes to signaling only; and (c) redesign: a complete redesign of infrastructure and signaling.

Traditional wireless communication and radar systems use two separate waveforms to perform communication and sensing. Employing a common waveform, however, can reduce cross-system interference and improve performance in both tasks. In [C4] we propose a joint multi-user communication and multi-direction radar beamforming design that reuses a single communication waveform for both communication and sensing.
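To make the reuse idea concrete, the following is a minimal numpy sketch of one way to combine the two tasks in a single transmit design: a zero-forcing precoder serves the communication users while additional beams are steered toward the radar directions, and both are merged into one transmit covariance. The uniform-linear-array geometry, the weight rho, and all dimensions are illustrative assumptions; this is a simplified sketch, not the optimization formulated in [C4].

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                      # transmit antennas (ULA, half-wavelength spacing)
K = 4                                       # communication users
radar_dirs = np.deg2rad([-40.0, 30.0])      # desired radar beam directions (assumed)

def steering(theta, n=N):
    """ULA steering vector for angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

# Random Rayleigh channels for the K users (one row per user).
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# Communication precoder: zero-forcing so each user's stream is interference-free.
W_comm = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W_comm /= np.linalg.norm(W_comm)

# Sensing beams: steer extra columns of the precoder toward the radar directions.
W_radar = np.stack([steering(t) for t in radar_dirs], axis=1)
W_radar /= np.linalg.norm(W_radar)

# Joint design: a power split between communication and sensing (hypothetical knob).
rho = 0.6
W = np.concatenate([np.sqrt(rho) * W_comm, np.sqrt(1 - rho) * W_radar], axis=1)
R = W @ W.conj().T                          # transmit covariance shared by both tasks

# Beampattern of the joint design: energy goes toward users and radar directions.
angles = np.deg2rad(np.linspace(-90, 90, 361))
pattern = np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in angles])
print("peak beampattern angle (deg):", np.rad2deg(angles[np.argmax(pattern)]))
```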

In large-scale wireless networks, factors such as device power limitations and variability in wireless channel quality lead to different levels of connectivity between agents. This calls for imaging algorithms that can operate over networks with varying connectivity. We address this problem in [C9], where we introduce the Distributed Generalized Wirtinger Flow (DGWF) algorithm, a distributed algorithm that performs interferometric imaging over wireless networks.
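The following is a schematic numpy sketch of the kind of distributed update DGWF enables: each agent runs Wirtinger-flow-style gradient steps on its own local intensity measurements and then gossips its estimate with its neighbors over a limited-connectivity (here, ring) network. The measurement model, topology, step size, and initialization are simplifying assumptions, and this is not the DGWF algorithm of [C9] itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m_per_agent, n_agents = 32, 64, 4

# Ground-truth complex image (vectorized) and per-agent measurement operators.
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = [(rng.standard_normal((m_per_agent, n)) + 1j * rng.standard_normal((m_per_agent, n)))
     / np.sqrt(2) for _ in range(n_agents)]
y = [np.abs(Ai @ x_true) ** 2 for Ai in A]      # local intensity measurements

# Doubly stochastic gossip matrix for a ring network (models limited connectivity).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Crude initialization near the truth (a spectral initialization would be used in practice).
x = [x_true + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
     for _ in range(n_agents)]

norm_est = np.mean(np.concatenate(y))           # estimate of ||x_true||^2 from the data
step = 0.1 / norm_est

for _ in range(300):
    # Local Wirtinger-flow gradient step on the intensity loss at every agent.
    grads = []
    for Ai, yi, xi in zip(A, y, x):
        z = Ai @ xi
        grads.append(Ai.conj().T @ ((np.abs(z) ** 2 - yi) * z) / m_per_agent)
    x = [xi - step * gi for xi, gi in zip(x, grads)]
    # Gossip step: each agent mixes its estimate with its neighbors'.
    x = list(W @ np.stack(x))

# Phase retrieval recovers the image only up to a global phase; align before measuring error.
phase = np.exp(-1j * np.angle(np.vdot(x_true, x[0])))
err = np.linalg.norm(phase * x[0] - x_true) / np.linalg.norm(x_true)
print("relative error at agent 0:", err)
```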

When imaging with a network of sensors, the locally collected data can have inherently different statistics, since each sensor observes the scene from a different viewpoint. One challenge is therefore to build a learning model that uses these heterogeneous data to enable tasks such as classification. Moreover, one could use the same data to build individual models at each client, even though a single client might not have enough data to train its own imaging model. Such distributed learning falls into the general paradigm of federated learning (FL). In [C7] we introduced an FL algorithm, QuPeD, that facilitates collective training with heterogeneous clients while respecting resource diversity, allowing clients to build models of different sizes. Numerically, we validated that QuPeD outperforms the state of the art in heterogeneous settings; this work was presented at the top-tier ML conference NeurIPS.
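The numpy sketch below illustrates, in highly simplified form, the kind of personalized training with heterogeneous model sizes that QuPeD targets: each client trains its own, differently sized model with a distillation-style penalty coupling it to a shared global model, which the server averages across clients. The regression setup, model widths, and coupling weight lam are hypothetical, and the quantization and alternating-optimization structure of QuPeD [C7] are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_per_client = 10, 200
widths = [4, 8, 16]                    # hypothetical per-client model sizes (resource diversity)
lam, lr, rounds, local_steps = 0.5, 0.05, 50, 5

# Heterogeneous local data: each client "views" the same underlying signal differently.
w_star = rng.standard_normal(d)
clients = []
for width in widths:
    X = rng.standard_normal((n_per_client, d)) * rng.uniform(0.5, 1.5, size=d)
    y = X @ w_star + 0.1 * rng.standard_normal(n_per_client)
    # Personalized model: one hidden layer of client-specific width (tanh activation).
    W1 = rng.standard_normal((d, width)) / np.sqrt(d)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    clients.append({"X": X, "y": y, "W1": W1, "w2": w2})

w_global = np.zeros(d)                 # shared linear model averaged at the server

for _ in range(rounds):
    new_globals = []
    for c in clients:
        X, y = c["X"], c["y"]
        wg = w_global.copy()
        for _ in range(local_steps):
            # Personalized prediction plus a distillation-style target from the global model.
            H = np.tanh(X @ c["W1"])
            pred = H @ c["w2"]
            soft = X @ wg
            err = (pred - y) + lam * (pred - soft)   # fit data, stay close to global predictions
            # Backpropagate through the small personalized network.
            c["w2"] -= lr * H.T @ err / len(y)
            c["W1"] -= lr * X.T @ ((err[:, None] * c["w2"]) * (1 - H ** 2)) / len(y)
            # Local update of the shared global model on the same data.
            wg -= lr * X.T @ (X @ wg - y) / len(y)
        new_globals.append(wg)
    w_global = np.mean(new_globals, axis=0)          # server-side averaging

print("global-model error:", np.linalg.norm(w_global - w_star) / np.linalg.norm(w_star))
```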

In doing distributed learning for imaging, we use data collected at client nodes to build a common learning model, as done in federated learning (FL). For wireless networks, however, communication bandwidth is limited, and one therefore needs learning mechanisms that are cognizant of these constraints. Moreover, most training algorithms incorporate acceleration (such as Nesterov or Polyak momentum) for faster convergence and empirically better learning performance. We designed and analyzed the first communication-efficient (compressed) algorithms for decentralized learning with momentum. In [J3], published in the IEEE Journal on Selected Areas in Information Theory, 2021, we proposed and analyzed SQuARM-SGD, a decentralized training algorithm employing momentum and compressed communication between nodes. In SQuARM-SGD, each node performs local SGD steps using Nesterov's momentum and then sends sparsified and quantized updates to its neighbors. We provided the first known convergence guarantees for compressed decentralized learning with momentum for strongly convex, convex, and non-convex smooth objectives, showing that SQuARM-SGD matches the convergence rate of vanilla distributed SGD in these settings. We corroborated our theoretical understanding with experiments comparing our algorithm against the state of the art and demonstrating significant gains.

In [J2] we proposed an alternate approach that quantizes data instead of gradients and can support learning in applications where the size of the gradient updates is prohibitive. Our approach combined aspects of (1) sample selection, (2) dataset quantization, and (3) gradient compensation. We analyzed the convergence of the proposed approach for smooth convex and non-convex objective functions and showed that we can achieve order-optimal convergence rates with communication that depends mostly on the data rather than on the model (gradient) dimension. We used the proposed algorithm to train ResNet models on the CIFAR-10 and ImageNet datasets and showed an order-of-magnitude savings in communication over gradient compression methods.
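The following numpy sketch assembles the main ingredients of the compressed decentralized training in [J3]: local gradient steps with Nesterov momentum, top-k sparsification plus coarse quantization of what each node sends, and a locally maintained public copy of every iterate that implicitly provides error feedback. The quadratic objectives, ring topology, full-batch gradients, and all hyperparameters are illustrative assumptions, and the scheme is a simplification rather than the exact SQuARM-SGD update.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_nodes, n_local = 50, 4, 200
lr, beta, gamma, sync, k = 0.05, 0.9, 0.5, 5, 5   # step, momentum, consensus step, sync period, top-k

def compress(v, k=k):
    """Top-k sparsification followed by sign quantization with a shared scale."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = np.sign(v[idx]) * np.mean(np.abs(v[idx]))
    return out

# Local least-squares objectives f_i(x) = 0.5 ||A_i x - b_i||^2 with a common minimizer.
x_star = rng.standard_normal(d)
A = [rng.standard_normal((n_local, d)) / np.sqrt(n_local) for _ in range(n_nodes)]
b = [Ai @ x_star + 0.05 * rng.standard_normal(n_local) for Ai in A]

# Ring-topology mixing weights.
Wmix = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    Wmix[i, i], Wmix[i, (i - 1) % n_nodes], Wmix[i, (i + 1) % n_nodes] = 0.5, 0.25, 0.25

x = np.zeros((n_nodes, d))       # local iterates
x_hat = np.zeros((n_nodes, d))   # publicly known (compressed) copies of each iterate
v = np.zeros((n_nodes, d))       # Nesterov momentum buffers

for t in range(400):
    for i in range(n_nodes):
        # Nesterov momentum: gradient evaluated at the look-ahead point (full batch here).
        look = x[i] + beta * v[i]
        g = A[i].T @ (A[i] @ look - b[i])
        v[i] = beta * v[i] - lr * g
        x[i] += v[i]
    if (t + 1) % sync == 0:
        # Each node broadcasts a sparsified, quantized correction to its public copy,
        # then nudges its iterate toward the (compressed) neighborhood average.
        q = np.array([compress(x[i] - x_hat[i]) for i in range(n_nodes)])
        x_hat += q
        x += gamma * (Wmix @ x_hat - x_hat)

err = np.linalg.norm(x.mean(axis=0) - x_star) / np.linalg.norm(x_star)
print("relative error of averaged iterate:", err)
```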

Theoretical Foundations

Experimental Testbeds

Broader Scientific Impacts

Systems and Methods