Localization, in the context of computer vision and robotics, is the process of determining the spatial position of an object or sensor within a given environment. The goal is to identify the location of the object or device precisely relative to a reference coordinate system, often as a three-dimensional (3D) position.
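As a concrete illustration of this definition, a point measured in a sensor's local frame can be expressed in world coordinates with a rigid-body transform (rotation plus translation). This is a minimal, generic sketch; the function and frame names are illustrative and not drawn from any of the papers listed below:

```python
import numpy as np

def localize_in_world(p_sensor, R, t):
    """Transform a point observed in a sensor's local frame into world coordinates.

    p_sensor: 3-vector in the sensor's local frame
    R: 3x3 rotation matrix (sensor frame -> world frame)
    t: 3-vector, the sensor's origin expressed in the world frame
    """
    return R @ np.asarray(p_sensor, dtype=float) + np.asarray(t, dtype=float)

# Example: a sensor sits at world position (1, 2, 0), rotated 90 degrees
# about the z-axis, and sees a landmark 4 m ahead along its own x-axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
p_world = localize_in_world([4.0, 0.0, 0.0], R, [1.0, 2.0, 0.0])
# The landmark's world position is (1, 6, 0).
```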

Posts

LDP-Feat: Image Features with Local Differential Privacy

Modern computer vision services often require users to share raw feature descriptors with an untrusted server. This presents an inherent privacy risk, as raw descriptors may be used to recover the source images from which they were extracted. To address this issue, researchers recently proposed privatizing image features by embedding them within an affine subspace containing the original feature as well as adversarial feature samples. In this paper, we propose two novel inversion attacks to show that it is possible to (approximately) recover the original image features from these embeddings, allowing us to recover privacy-critical image content. In light of such successes and the lack of theoretical privacy guarantees afforded by existing visual privacy methods, we further propose the first method to privatize image features via local differential privacy, which, unlike prior approaches, provides a guaranteed bound for privacy leakage regardless of the strength of the attacks. In addition, our method yields strong performance in visual localization as a downstream task while enjoying the privacy guarantee.
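The paper's specific mechanism is not reproduced here, but the core idea behind local differential privacy can be sketched with the standard Laplace mechanism applied to a clipped feature vector: clipping bounds the sensitivity, and noise scaled to sensitivity/ε yields an ε-LDP release. All names and parameter choices below are illustrative:

```python
import numpy as np

def laplace_ldp(feature, epsilon, clip=1.0, rng=None):
    """Privatize a feature vector with the standard Laplace mechanism.

    Clipping bounds each coordinate to [-clip, clip], so the L1 sensitivity
    of the whole d-dimensional vector is 2 * clip * d. Adding Laplace noise
    with scale sensitivity / epsilon gives an epsilon-LDP release.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(feature, dtype=float), -clip, clip)
    sensitivity = 2.0 * clip * x.size
    return x + rng.laplace(0.0, sensitivity / epsilon, size=x.shape)
```

Smaller ε means stronger privacy but more noise; the utility/privacy trade-off the abstract refers to is exactly this tension.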

Ambient Noise based Weakly Supervised Manhole Localization Methods over Deployed Fiber Networks

We present a manhole localization method based on distributed fiber-optic sensing and weakly supervised machine learning techniques. For the first time to our knowledge, ambient environment data is used for underground cable mapping, with the promise of enhancing operational efficiency and reducing field work. To effectively accommodate the weak informativeness of ambient data, a selective data sampling scheme and an attention-based deep multiple instance classification model are adopted, which only require weakly annotated data. The proposed approach is validated on field data collected by a fiber sensing system over multiple existing fiber networks.
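The attention-based multiple instance classification mentioned above can be sketched, in a much simplified form, as attention-weighted pooling over a bag of instance embeddings: each instance gets a learned score, and the bag representation is the softmax-weighted sum. This is a generic illustration of the technique, not the authors' model; the parameter names are hypothetical:

```python
import numpy as np

def attention_mil_pool(instances, w, v):
    """Attention pooling over a bag of instance embeddings.

    instances: (n, d) array, one row per instance in the bag
    v: (d, h) projection matrix, w: (h,) scoring vector -- illustrative
    learned parameters. Returns the attention-weighted bag embedding and
    the per-instance attention weights.
    """
    scores = np.tanh(instances @ v) @ w        # one scalar score per instance
    a = np.exp(scores - scores.max())
    a /= a.sum()                               # softmax attention weights
    return a @ instances, a
```

Because only the bag-level label is needed to train such a model, weakly annotated data suffices, which matches the paper's motivation.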

Drone Detection and Localization using Enhanced Fiber-Optic Acoustic Sensor and Distributed Acoustic Sensing Technology

In recent years, the widespread use of drones has led to serious concerns about safety and privacy. Drone detection using microphone arrays has proven to be a promising method. However, it is challenging for microphones to serve large-scale applications due to issues of synchronization, complexity, and data management. Moreover, distributed acoustic sensing (DAS) using optical fibers has demonstrated its advantages in monitoring vibrations over long distances but does not have the necessary sensitivity for weak airborne acoustics. In this work, we present, to the best of our knowledge, the first fiber-optic quasi-distributed acoustic sensing demonstration for drone surveillance. We develop enhanced fiber-optic acoustic sensors (FOASs) for DAS to detect drone sound. The FOAS shows an ultra-high measured sensitivity of −101.21 re. 1 rad/µPa, as well as the capability for high-fidelity speech recovery. A single DAS can interrogate a series of FOASs over a long distance via optical fiber, enabling intrinsic synchronization and centralized signal processing. We demonstrate a field test of drone detection and localization by concatenating four FOASs into a DAS system. Both the waveforms and spectral features of the drone sound are recognized. With acoustic field mapping and data fusion, accurate drone localization is achieved with a root-mean-square error (RMSE) of 1.47 degrees. This approach holds great potential for large-scale sound detection applications, such as drone detection or city event monitoring.
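For intuition on how spatially separated, intrinsically synchronized acoustic sensors enable angular localization: a far-field bearing can be estimated from the time difference of arrival (TDOA) between two sensors, with the TDOA itself found from the peak of their cross-correlation. This is a textbook sketch under a free-field assumption, not the paper's actual field-mapping and data-fusion pipeline:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air, at roughly room temperature

def tdoa_by_xcorr(sig_a, sig_b, fs):
    """Estimate how much later sig_b arrives than sig_a (in seconds),
    via the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

def bearing_from_tdoa(tdoa, spacing):
    """Far-field bearing (radians) from the TDOA between two sensors
    a known distance apart: tdoa = spacing * cos(theta) / c."""
    ratio = np.clip(SPEED_OF_SOUND * tdoa / spacing, -1.0, 1.0)
    return np.arccos(ratio)
```

A source broadside to the pair (zero TDOA) maps to a 90-degree bearing; with more than two sensors, such pairwise estimates can be fused, which is the role the four concatenated FOASs play in the demonstration.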

RoVaR: Robust Multi-agent Tracking through Dual-layer Diversity in Visual and RF Sensor Fusion

The plethora of sensors in our commodity devices provides a rich substrate for sensor-fused tracking. Yet, today's solutions are unable to deliver robust, high tracking accuracy across multiple agents in practical, everyday environments, a feature central to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from catering simultaneously to the multiple dimensions of accuracy, robustness (diverse environmental conditions), and scalability (multiple agents). In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities, passive/relative (e.g., visual odometry) and active/absolute (e.g., infrastructure-assisted RF localization), offers a key first layer of diversity that brings scalability, while the second layer of diversity lies in the methodology of fusion, where we bring together the complementary strengths of algorithmic (for robustness) and data-driven (for accuracy) approaches. RoVaR is an embodiment of such a dual-layer diversity approach that intelligently attends to cross-modal information using algorithmic and data-driven techniques that jointly share the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal RoVaR's multi-dimensional benefits: tracking accuracy (median of 15 cm), robustness in unseen environments, and lightweight operation (running in real time on mobile platforms such as the Jetson Nano/TX2), enabling practical multi-agent immersive applications in everyday environments.
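One simple way to see why a relative modality (odometry, which drifts) and an absolute modality (RF fixes, which are noisy but drift-free) complement each other is a scalar Kalman-style predict/update cycle: odometry drives the prediction, the RF fix corrects it. This is a generic illustration of the fusion principle, not RoVaR's actual algorithm:

```python
def fuse_step(x, P, delta_odo, q, z_rf, r):
    """One predict/update cycle fusing relative odometry with an absolute RF fix.

    x, P: current 1D position estimate and its variance
    delta_odo: displacement reported by odometry this step, with noise variance q
    z_rf: absolute position fix from RF ranging, with noise variance r
    Returns the fused estimate and its (reduced) variance.
    """
    # Predict: accumulate the relative motion; uncertainty grows (drift).
    x_pred, P_pred = x + delta_odo, P + q
    # Update: blend in the absolute fix, weighted by relative confidence.
    K = P_pred / (P_pred + r)
    return x_pred + K * (z_rf - x_pred), (1 - K) * P_pred
```

The absolute fix bounds the drift that pure odometry would accumulate, which is exactly the first layer of diversity the abstract describes; each agent tracking against shared infrastructure is also what makes the scheme scale to multiple agents.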

SkyRAN: A Self-Organizing LTE RAN in the Sky

We envision a flexible, dynamic airborne LTE infrastructure built upon Unmanned Autonomous Vehicles (UAVs) that will provide on-demand, on-time network access, anywhere. In this paper, we design, implement and evaluate SkyRAN, a self-organizing UAV-based LTE RAN (Radio Access Network) that is a key component of this UAV LTE infrastructure network. SkyRAN determines the UAV's operating position in 3D airspace so as to optimize connectivity to all the UEs on the ground. It realizes this by overcoming various challenges in constructing and maintaining radio environment maps to UEs that guide the UAV's position in real time. SkyRAN is designed to be scalable in that it can be quickly deployed to provide efficient connectivity even over a large area. It is adaptive in that it reacts to changes in the terrain and UE mobility to maximize LTE coverage performance while minimizing operating overhead. We implement SkyRAN on a DJI Matrice 600 Pro drone and evaluate it over a 90,000 m² operating area. Our testbed results indicate that SkyRAN can place the UAV in the optimal location with about 30 seconds of a measurement flight. On average, SkyRAN achieves a throughput of 0.9–0.95× optimal, which is about 1.5–2× that of other popular baseline schemes.
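The placement problem SkyRAN solves can be caricatured as choosing, from a set of candidate 3D positions, the one that maximizes the worst-case link quality over all ground UEs. The sketch below uses a toy inverse-power path-loss proxy for link quality rather than SkyRAN's measured radio environment maps, and all names are illustrative:

```python
import numpy as np

def best_uav_position(candidates, ues, alpha=2.0):
    """Pick the candidate UAV position maximizing the weakest UE link.

    candidates: (m, 3) array of candidate UAV positions in 3D airspace
    ues: (k, 3) array of UE positions on the ground
    alpha: path-loss exponent; link quality is modeled as 1 / d**alpha.
    """
    # Pairwise distances: (m, k) via broadcasting.
    d = np.linalg.norm(candidates[:, None, :] - ues[None, :, :], axis=2)
    quality = 1.0 / np.maximum(d, 1e-9) ** alpha
    worst = quality.min(axis=1)        # each candidate's weakest link
    return candidates[np.argmax(worst)]

# Example: UEs at the corners of a 100 m square; a centered candidate
# should beat a corner-hovering one under this max-min criterion.
ues = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0],
                [0.0, 100.0, 0.0], [100.0, 100.0, 0.0]])
candidates = np.array([[0.0, 0.0, 50.0],
                       [100.0, 100.0, 50.0],
                       [50.0, 50.0, 50.0]])
best = best_uav_position(candidates, ues)
```

The hard part in practice, which the paper addresses, is that real link quality must be learned from measurement flights and kept current under terrain effects and UE mobility, not computed from a closed-form model like this one.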

SkyLiTE: End-to-End Design of Low-altitude UAV Networks for Providing LTE Connectivity

Unmanned aerial vehicles (UAVs) have the potential to change the landscape of wide-area wireless connectivity by bringing it to areas where connectivity was sparse or non-existent (e.g., rural areas) or has been compromised due to disasters. While Google's Project Loon and Facebook's Project Aquila are examples of high-altitude, long-endurance UAV-based connectivity efforts in this direction, telecom operators (e.g., AT&T and Verizon) have been exploring low-altitude UAV-based LTE solutions for on-demand deployments. Understandably, these projects are in their early stages and face formidable challenges in their realization and deployment. The goal of this document is to expose the reader to both the challenges and the potential offered by these unconventional connectivity solutions. We aim to explore the end-to-end design of such UAV-based connectivity networks, particularly in the context of low-altitude UAV networks providing LTE connectivity. Specifically, we highlight the challenges that span multiple layers (access, core network, and backhaul) in an intertwined manner, as well as the richness and complexity of the design space itself. To help interested readers navigate this complex design space towards a solution, we also present an overview of one such end-to-end design, SkyLiTE, a self-organizing network of low-altitude UAVs that provides optimized LTE connectivity in a desired region.