Posts

Ambient Noise based Weakly Supervised Manhole Localization Methods over Deployed Fiber Networks

We present a manhole localization method based on distributed fiber-optic sensing and weakly supervised machine learning. To our knowledge, this is the first time ambient-environment data has been used for underground cable mapping, with the promise of enhancing operational efficiency and reducing field work. To accommodate the weak informativeness of ambient data, we adopt a selective data-sampling scheme and an attention-based deep multiple instance classification model, which require only weakly annotated data. The proposed approach is validated on field data collected by a fiber sensing system over multiple existing fiber networks.
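
To make the weakly supervised part concrete: attention-based deep multiple instance classification treats a group of instance features (e.g., noise segments from one location) as a "bag" that carries a single weak label, and learns attention weights to pool the instances. Below is a minimal sketch of that pooling idea, assuming PyTorch; the feature dimension, layer sizes, and the AttentionMIL name are illustrative choices, not details from the paper.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based deep MIL classifier: a bag of instance feature
    vectors is pooled with learned attention weights, so only one
    bag-level (weak) label is needed for training."""
    def __init__(self, in_dim=128, hid_dim=64):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(in_dim, hid_dim),
            nn.Tanh(),
            nn.Linear(hid_dim, 1),
        )
        self.classifier = nn.Linear(in_dim, 1)

    def forward(self, bag):                     # bag: (num_instances, in_dim)
        scores = self.attention(bag)            # (num_instances, 1)
        weights = torch.softmax(scores, dim=0)  # attention over instances
        embedding = (weights * bag).sum(dim=0)  # weighted bag embedding
        return torch.sigmoid(self.classifier(embedding)), weights

# Example: a bag of 20 ambient-noise feature vectors gets one weak label.
bag = torch.randn(20, 128)
prob, attn = AttentionMIL()(bag)
```

The attention weights also indicate which instances drove the bag-level decision, which is what makes this style of model attractive when only coarse annotations are available.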

Drone Detection and Localization using Enhanced Fiber-Optic Acoustic Sensor and Distributed Acoustic Sensing Technology

In recent years, the widespread use of drones has raised serious concerns about safety and privacy. Drone detection using microphone arrays has proven to be a promising method; however, it is challenging for microphones to serve large-scale applications due to issues of synchronization, complexity, and data management. Meanwhile, distributed acoustic sensing (DAS) using optical fibers has demonstrated its advantages in monitoring vibrations over long distances but lacks the sensitivity needed for weak airborne acoustics. In this work, we present, to the best of our knowledge, the first fiber-optic quasi-distributed acoustic sensing demonstration for drone surveillance. We develop enhanced fiber-optic acoustic sensors (FOASs) for DAS to detect drone sound. The FOAS shows an ultra-high measured sensitivity of −101.21 dB re 1 rad/µPa, as well as the capability for high-fidelity speech recovery. A single DAS system can interrogate a series of FOASs over a long distance via optical fiber, enabling intrinsic synchronization and centralized signal processing. We demonstrate a field test of drone detection and localization by concatenating four FOASs on a DAS system. Both the waveforms and the spectral features of the drone sound are recognized. With acoustic field mapping and data fusion, accurate drone localization is achieved with a root-mean-square error (RMSE) of 1.47 degrees. This approach holds great potential for large-scale sound-detection applications such as drone detection and city event monitoring.
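
The error reported in degrees implies bearing estimation from synchronized sensors. The paper's actual acoustic-field-mapping and data-fusion pipeline is not spelled out here, so the sketch below is only an illustrative stand-in: it estimates a far-field source bearing from one pair of intrinsically synchronized sensors via cross-correlation time-difference-of-arrival, using NumPy. The function name and parameters are hypothetical.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air, approximate

def tdoa_bearing(sig_a, sig_b, fs, spacing):
    """Bearing (degrees from broadside) of a far-field source, from the
    time difference of arrival between two synchronized sensors."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # delay in samples (sign = side)
    tau = lag / fs                            # delay in seconds
    # Far-field geometry: tau = spacing * sin(theta) / c
    s = np.clip(tau * SPEED_OF_SOUND / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Example: broadband noise standing in for drone sound, delayed 3 samples.
fs = 8_000
src = np.random.default_rng(0).standard_normal(fs // 2)
theta = tdoa_bearing(np.roll(src, 3), src, fs, spacing=1.0)  # ~7.4 degrees
```

With several such sensor pairs along the fiber, per-pair bearings (or a mapped acoustic field) can be fused into a single source direction, which is the role data fusion plays in the reported result.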

RoVaR: Robust Multi-agent Tracking through Dual-layer Diversity in Visual and RF Sensor Fusion

The plethora of sensors in our commodity devices provides a rich substrate for sensor-fused tracking. Yet today's solutions are unable to deliver robust, high tracking accuracy across multiple agents in practical, everyday environments, a feature central to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from catering simultaneously to the multiple dimensions of accuracy, robustness (diverse environmental conditions), and scalability (multiple agents). In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities, passive/relative (e.g., visual odometry) and active/absolute (e.g., infrastructure-assisted RF localization), offers a key first layer of diversity that brings scalability, while the second layer of diversity lies in the methodology of fusion, where we bring together the complementary strengths of algorithmic (for robustness) and data-driven (for accuracy) approaches. RoVaR is an embodiment of such a dual-layer diversity approach that intelligently attends to cross-modal information using algorithmic and data-driven techniques that jointly share the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal RoVaR's multi-dimensional benefits: tracking accuracy (median of 15 cm), robustness (in unseen environments), scalability (multiple agents), and light weight (running in real time on mobile platforms such as the Jetson Nano/TX2), enabling practical multi-agent immersive applications in everyday environments.

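To give a feel for the first diversity layer, the toy sketch below fuses drift-prone relative visual-odometry steps with intermittent absolute RF fixes using a simple constant-gain correction. This is only an illustrative stand-in, not RoVaR's method: the paper's fusion is an attention-based combination of algorithmic and data-driven techniques, and all names, gains, and noise figures here are assumptions.

```python
import numpy as np

def fuse_track(vo_deltas, rf_fixes, rf_gain=0.3, start=np.zeros(2)):
    """vo_deltas: (T, 2) per-step displacements from visual odometry
    (smooth but drift-prone). rf_fixes: (T, 2) absolute RF position
    fixes, NaN when unavailable (noisy but drift-free).
    Returns (T, 2) fused positions."""
    pos = start.astype(float)
    track = []
    for delta, fix in zip(vo_deltas, rf_fixes):
        pos = pos + delta                 # dead-reckon with VO (drifts)
        if not np.any(np.isnan(fix)):     # absolute RF fix available
            pos = pos + rf_gain * (fix - pos)  # pull toward absolute estimate
        track.append(pos.copy())
    return np.array(track)

# Example: straight-line motion, drifting VO, a noisy RF fix every 5th step.
T = 100
truth = np.cumsum(np.tile([0.1, 0.0], (T, 1)), axis=0)
vo = np.tile([0.1, 0.002], (T, 1))        # small constant drift in y
rf = truth + np.random.default_rng(0).normal(0, 0.3, (T, 2))
rf[np.arange(T) % 5 != 0] = np.nan        # fixes are intermittent
fused = fuse_track(vo, rf)
```

The relative modality keeps the track smooth between fixes, and the absolute modality bounds the accumulated drift; the second diversity layer in RoVaR replaces the fixed gain with learned, attention-driven weighting of the two modalities.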