Efficient Compression Method for Roadside LiDAR Data Roadside LiDAR (Light Detection and Ranging) sensors are increasingly being explored for intelligent transportation systems, aiming at safer and faster traffic management and vehicular operations. A key challenge in such systems is to efficiently transfer massive point-cloud data from roadside LiDAR devices to the edge over a 5G network for real-time processing. In this paper, we consider the problem of compressing roadside (i.e., static) LiDAR data in real time, a setting that presents unique conditions unexplored by current methods. Existing point-cloud compression methods assume moving LiDARs (mounted on vehicles) and do not exploit the spatial consistency across frames over time. To this end, we develop SLiC, a novel grouped wavelet technique for static roadside LiDAR data compression. Our method compresses LiDAR data both spatially and temporally using a kd-tree data structure and Haar wavelet coefficients. Experimental results show that SLiC compresses up to 1.9× more effectively than the state-of-the-art compression method. Moreover, SLiC is computationally more efficient, achieving a 2× improvement in bandwidth usage over the best alternative. Even with this gain in communication and storage efficiency, SLiC retains the accuracy of down-the-pipeline applications.
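To make the grouped-wavelet idea concrete, below is a minimal, self-contained sketch, not SLiC's actual implementation: points are grouped with a simple kd-style partition so that spatially consistent points share a leaf, each leaf's coordinates are run through a multi-level Haar transform, and small detail coefficients are quantized away. All function names and parameters here (kd_partition, haar_1d, compress_leaf, leaf_size, step) are illustrative assumptions.

```python
import numpy as np

def kd_partition(points, leaf_size=64):
    """Recursively split along the widest axis until each leaf is small."""
    if len(points) <= leaf_size:
        return [points]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])
    mid = len(points) // 2
    return (kd_partition(points[order[:mid]], leaf_size)
            + kd_partition(points[order[mid:]], leaf_size))

def haar_1d(x):
    """Multi-level Haar transform (input padded to a power-of-two length)."""
    n = 1 << int(np.ceil(np.log2(len(x))))
    out = np.pad(x.astype(float), (0, n - len(x)), mode="edge")
    while n > 1:
        a = (out[0:n:2] + out[1:n:2]) / np.sqrt(2)   # approximations
        d = (out[0:n:2] - out[1:n:2]) / np.sqrt(2)   # details (mostly small)
        out[:n // 2], out[n // 2:n] = a, d
        n //= 2
    return out

def compress_leaf(leaf, step=0.05):
    """Transform each coordinate of a leaf, then uniformly quantize."""
    coeffs = np.stack([haar_1d(leaf[:, c]) for c in range(leaf.shape[1])])
    return np.round(coeffs / step).astype(np.int32)

# Toy usage on a synthetic static frame: the sparsity of the quantized
# coefficients is what a downstream entropy coder would exploit.
frame = np.random.default_rng(0).uniform(0, 100, size=(10_000, 3))
leaves = kd_partition(frame)
quantized = [compress_leaf(leaf) for leaf in leaves]
nonzero = sum(int((q != 0).sum()) for q in quantized)
total = sum(q.size for q in quantized)
print(f"nonzero coefficients: {nonzero}/{total}")
```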
5G, short for “fifth generation,” is the latest and most advanced standard of wireless communication technology. It represents a significant leap forward from the previous generation, 4G (LTE), and it is designed to provide faster and more reliable wireless connectivity, as well as support a wide range of applications and use cases.
Application-specific, Dynamic Reservation of 5G Compute and Network Resources by using Reinforcement Learning 5G services and applications explicitly reserve compute and network resources in today's complex and dynamic infrastructure of multi-tiered computing and cellular networking to ensure application-specific service quality metrics, and the infrastructure providers charge the 5G services for the resources reserved. A static, one-time reservation of resources at service deployment typically results in extended periods of under-utilization of reserved resources during the lifetime of the service operation. This is due to a plethora of reasons, such as changes in content from the IoT sensors (for example, a change in the number of people in the field of view of a camera) or a change in the environmental conditions around the IoT sensors (for example, the time of day, rain, or fog can affect data acquisition by sensors). Under-utilization of a specific resource like compute can also be due to temporary inadequate availability of another resource like network bandwidth in a dynamic 5G infrastructure. We propose a novel Reinforcement Learning-based online method to dynamically adjust an application's compute and network resource reservations to minimize under-utilization of requested resources, while ensuring acceptable service quality metrics. We observe that a complex application-specific coupling exists between the compute and network usage of an application. Our proposed method learns this coupling during the operation of the service, and dynamically modulates the compute and network resource requests to minimize under-utilization of reserved resources. Through experimental evaluation using a real-world video analytics application, we show that our technique is able to capture the complex compute-network coupling relationship in an online manner, i.e., while the application is running, and dynamically adapts to save up to 65% compute and 93% network resources on average (over multiple runs), without significantly impacting application accuracy.
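As a rough illustration of the kind of online loop such a method could use, the sketch below runs tabular Q-learning over discretized reservation levels, rewarding SLA compliance and penalizing under-utilization. The state space, reward shaping, and synthetic demand model are hypothetical simplifications, not the paper's actual formulation.

```python
import random
from collections import defaultdict

ACTIONS = [-1, 0, +1]          # shrink, keep, or grow the reservation one step
LEVELS = list(range(1, 11))    # discretized reservation levels (resource units)

def reward(reserved, used, sla_ok):
    waste = max(reserved - used, 0) / reserved      # under-utilization fraction
    return (1.0 if sla_ok else -5.0) - waste        # penalize waste and SLA misses

q = defaultdict(float)          # Q[(state, action)], state = current level
alpha, gamma, eps = 0.1, 0.9, 0.1
level = 5
for step in range(10_000):
    state = level
    action = (random.choice(ACTIONS) if random.random() < eps
              else max(ACTIONS, key=lambda a: q[(state, a)]))
    level = min(max(level + action, LEVELS[0]), LEVELS[-1])
    # Hypothetical observation: demand drifts (e.g., with time of day); the
    # SLA holds whenever the reservation covers the demand.
    demand = 3 + 2 * (step // 2500 % 2) + random.random()
    r = reward(level, min(demand, level), sla_ok=(level >= demand))
    best_next = max(q[(level, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])

print("learned policy:", {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in LEVELS})
```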
ROMA: Resource Orchestration for Microservices-based 5G Applications With the growth of 5G, Internet of Things (IoT), edge computing and cloud computing technologies, the infrastructure (compute and network) available to emerging applications (AR/VR, autonomous driving, industry 4.0, etc.) has become quite complex. There are multiple tiers of computing (IoT devices, near edge, far edge, cloud, etc.) that are connected with different types of networking technologies (LAN, LTE, 5G, MAN, WAN, etc.). Deployment and management of applications in such an environment is quite challenging. In this paper, we propose ROMA, which performs resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network fabric. We assume that only application-level requirements are known, and the detailed requirements of the individual microservices in the application are not specified. As part of our solution, ROMA identifies and leverages the coupling relationship between compute and network usage for various microservices and solves an optimization problem in order to appropriately identify how each microservice should be deployed in the complex, multi-tiered compute and network fabric, so that the end-to-end application requirements are optimally met. We implemented two real-world 5G applications in the video surveillance and intelligent transportation system (ITS) domains. Through extensive experiments, we show that ROMA is able to save up to 90%, 55% and 44% compute and up to 80%, 95% and 75% network bandwidth for the surveillance (watchlist) application and the transportation application's person-detection and car-detection workloads, respectively. This improvement is achieved while honoring the application performance requirements, and it is measured against an alternative scheme that employs a static, overprovisioned resource allocation strategy and ignores the resource coupling relationships.
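The placement subproblem can be illustrated with a toy brute-force search: assign each microservice to a tier so that a latency budget holds while compute cost is minimized. The tier names, costs, latencies, and service names below are made up, and ROMA's actual optimization is far more sophisticated than this exhaustive enumeration.

```python
from itertools import product

# Hypothetical tiers: (compute cost per unit, latency in ms to reach the tier)
TIERS = {"near_edge": (5.0, 2.0), "far_edge": (3.0, 10.0), "cloud": (1.0, 40.0)}
SERVICES = {"decode": 2.0, "detect": 8.0, "track": 3.0}   # compute units needed
LATENCY_BUDGET_MS = 60.0

best = None
for placement in product(TIERS, repeat=len(SERVICES)):
    latency = sum(TIERS[t][1] for t in placement)          # hops traversed
    if latency > LATENCY_BUDGET_MS:
        continue                                           # violates the SLA
    cost = sum(TIERS[t][0] * SERVICES[s] for s, t in zip(SERVICES, placement))
    if best is None or cost < best[0]:
        best = (cost, dict(zip(SERVICES, placement)))

print("cheapest feasible placement:", best)
```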
Edge-based fever screening system over private 5G Edge computing and 5G have made it possible to perform analytics closer to the source of data and achieve super-low-latency response times, which is not possible with a centralized cloud deployment. In this paper, we present a novel fever screening system, which uses edge machine learning techniques and leverages private 5G to accurately identify and screen individuals with fever in real time. In particular, we present novel deep-learning-based techniques for fusion and alignment of cross-spectral visual and thermal data streams at the edge. Our novel Cross-Spectral Generative Adversarial Network (CS-GAN) synthesizes visual images that have the key, representative object-level features required to uniquely associate objects across the visual and thermal spectrum. Two key features of CS-GAN are a novel, feature-preserving loss function that results in high-quality pairing of corresponding cross-spectral objects, and dual bottleneck residual layers with skip connections (a new network enhancement) that not only accelerate real-time inference but also speed up convergence during model training at the edge. To the best of our knowledge, this is the first technique that leverages 5G networks and limited edge resources to enable real-time feature-level association of objects in visual and thermal streams (30 ms per full HD frame on an Intel Core i7-8650 4-core, 1.9 GHz mobile processor), and the first system to achieve real-time operation, which has enabled fever screening of employees and guests in arenas, theme parks, airports and other critical facilities. By leveraging edge computing and 5G, our fever screening system achieves 98.5% accuracy and processes ∼5× more people when compared to a centralized cloud deployment.
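The feature-preserving objective can be sketched schematically as an adversarial term plus a feature-matching penalty between real and synthesized images. In this toy version, phi_real/phi_fake stand in for features from a hypothetical extractor, D_fake for discriminator scores on synthesized images, and the L1 form and the weight lam are assumptions rather than CS-GAN's exact loss.

```python
import numpy as np

def generator_loss(D_fake, phi_real, phi_fake, lam=10.0):
    """Adversarial term plus an L1 feature-preservation term (illustrative)."""
    adv = -np.mean(np.log(D_fake + 1e-8))          # non-saturating GAN loss
    feat = np.mean(np.abs(phi_real - phi_fake))    # keep object-level features
    return adv + lam * feat

# Toy usage with random stand-ins for network outputs:
rng = np.random.default_rng(0)
D_fake = rng.uniform(0.1, 0.9, size=32)            # discriminator scores on fakes
phi_real = rng.normal(size=(32, 128))              # features of real visual crops
phi_fake = phi_real + 0.1 * rng.normal(size=(32, 128))
print(f"generator loss: {generator_loss(D_fake, phi_real, phi_fake):.3f}")
```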
AppSlice: A system for application-centric design of 5G and edge computing applications Applications that use edge computing and 5G to improve response times consume both compute and network resources. However, 5G networks manage only network resources without considering the application's compute requirements, and container orchestration frameworks manage only compute resources without considering the application's network requirements. We observe that there is a complex coupling between an application's compute and network usage, which can be leveraged to improve application performance and resource utilization. We propose a new, declarative abstraction called app slice that jointly considers the application's compute and network requirements. This abstraction leverages container management systems to manage edge computing resources, and 5G network stacks to manage network resources, while the coupling between compute and network usage is explicitly managed by a new runtime system, which delivers the declarative semantics of the app slice. The runtime system also jointly manages edge compute and network resource usage automatically across different edge computing environments and 5G networks by using two adaptive algorithms. We implement a complex, real-world, real-time monitoring application using the proposed app slice abstraction, and demonstrate on a private 5G/LTE testbed that the proposed runtime system significantly improves application performance and resource usage when compared with the case where the coupling between compute and network resource usage is ignored.
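To give a flavor of what a declarative app slice might look like, here is a hypothetical Python schema that jointly captures compute and network requirements in one specification. The field names (cpu_cores, uplink_mbps, max_latency_ms) and the overall structure are invented for illustration; the paper defines the actual abstraction and its runtime semantics.

```python
from dataclasses import dataclass, field

@dataclass
class Microservice:
    name: str
    cpu_cores: float            # compute requirement
    uplink_mbps: float          # network requirement toward the next stage
    max_latency_ms: float       # per-hop latency bound

@dataclass
class AppSlice:
    app: str
    end_to_end_latency_ms: float
    services: list = field(default_factory=list)

# Example slice for a hypothetical real-time monitoring pipeline:
slice_spec = AppSlice(
    app="real-time-monitoring",
    end_to_end_latency_ms=100.0,
    services=[
        Microservice("camera-ingest", cpu_cores=1.0, uplink_mbps=25.0,
                     max_latency_ms=20.0),
        Microservice("inference", cpu_cores=4.0, uplink_mbps=2.0,
                     max_latency_ms=50.0),
    ],
)
print(slice_spec)
```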
SkyHAUL: A Self-Organizing Gigabit Network In The Sky We design and build SkyHaul, the first large-scale, self-organizing network of Unmanned Aerial Vehicles (UAVs) that are connected using a mmWave wireless mesh backhaul. While the use of a mmWave backhaul paves the way for a new class of bandwidth-intensive, latency-sensitive cooperative applications (e.g., LTE coverage during disasters), the network of UAVs allows these applications to be executed at operating ranges that are far beyond the line-of-sight distances that limit individual UAVs today. To realize the challenging vision of deploying and maintaining an airborne mmWave mesh backhaul that caters to dynamic applications, SkyHaul's design incorporates various elements: (i) role-specific UAV operations that simultaneously address application tracking and backhaul connectivity, (ii) novel algorithms to jointly address the problem of deployment (position, yaw of UAVs) and traffic routing across the UAV network, and (iii) a provably optimal solution for fast and safe reconfiguration of the UAV backhaul during application dynamics. We evaluate the performance of SkyHaul through both real-world UAV flight operations and large-scale simulations.
SkyHaul: An Autonomous Gigabit Network Fabric In The Sky We design and build SKYHAUL, the first large-scale, autonomous, self-organizing network of Unmanned Aerial Vehicles (UAVs) that are connected using a mmWave wireless mesh backhaul. While the use of a mmWave backhaul paves the way for a new class of bandwidth-intensive, latency-sensitive cooperative applications (e.g., LTE coverage during disasters, surveillance during rescue in challenging terrains), the network of UAVs allows these applications to be executed at operating ranges that are far beyond the line-of-sight distances that limit individual UAVs today. To realize the challenging vision of deploying and maintaining an airborne mmWave mesh backhaul that caters to dynamic applications, SKYHAUL's design incorporates various elements: (1) role-specific UAV operations that simultaneously address application tracking and backhaul connectivity, (2) novel algorithms to jointly address the problem of deployment (position, yaw of UAVs) and traffic routing across the UAV network, and (3) a provably optimal solution for fast and safe reconfiguration of the UAV backhaul during application dynamics. We implement SKYHAUL on four DJI Matrice 600 Pro UAVs to demonstrate its practicality and performance through autonomous flight operations, complemented by large-scale simulations.
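The routing half of the joint deployment/routing problem can be illustrated with a classic widest-path (maximum-bottleneck) search over the UAV mesh: choose the route whose weakest mmWave link is strongest. This is a textbook building block, not SkyHaul's actual joint algorithm, and the mesh topology and link capacities below are made up.

```python
import heapq

def widest_path(graph, src, dst):
    """graph: {node: {neighbor: capacity_gbps}}. Returns (bottleneck, path)."""
    best = {src: float("inf")}    # best known bottleneck width to each node
    prev = {}
    heap = [(-float("inf"), src)]
    while heap:
        neg_width, u = heapq.heappop(heap)
        width = -neg_width
        if u == dst:              # first pop of dst is the max-bottleneck route
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return width, path[::-1]
        for v, cap in graph[u].items():
            w = min(width, cap)   # path width = weakest link so far
            if w > best.get(v, 0.0):
                best[v], prev[v] = w, u
                heapq.heappush(heap, (-w, v))
    return 0.0, []

# Toy three-UAV mesh: routing uav1 -> uav3 via uav2 gives a 0.9 Gbps bottleneck,
# better than the direct 0.4 Gbps link.
mesh = {"uav1": {"uav2": 1.2, "uav3": 0.4},
        "uav2": {"uav1": 1.2, "uav3": 0.9},
        "uav3": {"uav1": 0.4, "uav2": 0.9}}
print(widest_path(mesh, "uav1", "uav3"))
```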
SkyRAN: A Self-Organizing LTE RAN in the Sky We envision a flexible, dynamic airborne LTE infrastructure built upon Unmanned Aerial Vehicles (UAVs) that will provide on-demand, on-time network access, anywhere. In this paper, we design, implement and evaluate SkyRAN, a self-organizing UAV-based LTE RAN (Radio Access Network) that is a key component of this UAV LTE infrastructure network. SkyRAN determines the UAV's operating position in 3D airspace so as to optimize connectivity to all the UEs on the ground. It realizes this by overcoming various challenges in constructing and maintaining radio environment maps to UEs that guide the UAV's positioning in real time. SkyRAN is designed to be scalable, in that it can be quickly deployed to provide efficient connectivity even over a large area, and adaptive, in that it reacts to changes in the terrain and UE mobility to maximize LTE coverage performance while minimizing operating overhead. We implement SkyRAN on a DJI Matrice 600 Pro drone and evaluate it over a 90,000 m² operating area. Our testbed results indicate that SkyRAN can place the UAV in the optimal location with about 30 seconds of measurement flight. On average, SkyRAN achieves a throughput of 0.9–0.95× of the optimal, which is about 1.5–2× that of other popular baseline schemes.
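A simplified view of the radio-environment-map idea: interpolate sparse RSRP measurements over a grid of candidate hover positions and pick the best cell. The inverse-distance interpolation and the toy path-loss model below are assumptions for illustration, not SkyRAN's actual REM construction or placement optimization.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.uniform(0, 300, size=(25, 2))   # measurement locations (meters)
# Toy signal model: RSRP (dBm) decays with distance from a hotspot at (150, 150).
rsrp = -70 - 0.1 * np.linalg.norm(samples - [150, 150], axis=1)

# Inverse-distance-weighted interpolation onto a grid of candidate positions.
xs, ys = np.meshgrid(np.linspace(0, 300, 31), np.linspace(0, 300, 31))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2) + 1e-6
w = 1.0 / d**2
rem = (w * rsrp).sum(axis=1) / w.sum(axis=1)  # predicted RSRP per grid cell

best = grid[np.argmax(rem)]
print(f"best hover position ~ {best}, predicted RSRP {rem.max():.1f} dBm")
```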
SkyCore: Moving Core to the Edge for Untethered and Reliable UAV-based LTE Networks The advances in unmanned aerial vehicle (UAV) technology have empowered mobile operators to deploy LTE base stations (BSs) on UAVs, and provide on-demand, adaptive connectivity to hotspot venues as well as emergency scenarios. However, today's evolved packet core (EPC) that orchestrates the LTE RAN faces fundamental limitations in catering to such a challenging, wireless and mobile UAV environment, particularly in the presence of multiple BSs (UAVs). In this work, we argue for and propose an alternate, radical edge EPC design, called SkyCore, that pushes the EPC functionality to the extreme edge of the core network, collapsing the EPC into a single, lightweight, self-contained entity that is co-located with each UAV BS. SkyCore incorporates elements that are designed to address the unique challenges facing such a distributed design in the UAV environment, namely the resource constraints of UAV platforms and the distributed management of pronounced UAV and UE mobility. We build and deploy a fully functional version of SkyCore on a two-UAV LTE network and showcase its (i) ability to interoperate with commercial LTE BSs as well as smartphones, (ii) support for both hotspot and standalone multi-UAV deployments, and (iii) superior control- and data-plane performance compared to other EPC variants in this environment.