AppSlice: A system for application-centric design of 5G and edge computing applications

Applications that use edge computing and 5G to improve response times consume both compute and network resources. However, 5G networks manage only network resources without considering the application’s compute requirements, and container orchestration frameworks manage only compute resources without considering the application’s network requirements. We observe a complex coupling between an application’s compute and network usage that can be leveraged to improve application performance and resource utilization. We propose a new, declarative abstraction called an app slice that jointly captures an application’s compute and network requirements. The abstraction relies on container management systems to manage edge compute resources and on 5G network stacks to manage network resources, while a new runtime system delivers the declarative semantics of the app slice by explicitly managing the coupling between compute and network usage. Using two adaptive algorithms, the runtime system also automatically manages joint edge compute and network resource usage across different edge computing environments and 5G networks. We implement a complex, real-world, real-time monitoring application using the proposed app slice abstraction and demonstrate on a private 5G/LTE testbed that the proposed runtime system significantly improves application performance and resource usage compared with the case where the coupling between compute and network resource usage is ignored.
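
To make the abstraction concrete, here is a minimal, hypothetical Python sketch of what a declarative app slice spec might look like and how a runtime could split it between a container orchestrator and a 5G slice manager. All field and function names are illustrative assumptions, not the paper’s actual API.

```python
# Hypothetical sketch of the declarative "app slice" idea: a single spec that
# names both compute and network requirements, which a runtime would split
# between a container orchestrator and a 5G network stack. All field names
# are illustrative assumptions, not the paper's actual API.

app_slice = {
    "name": "monitoring-app",
    "microservices": [
        {"name": "detector", "cpu": "2", "memory": "4Gi", "gpu": 1},
        {"name": "tracker", "cpu": "1", "memory": "2Gi"},
    ],
    "network": {
        "uplink_mbps": 50,       # camera streams into the edge
        "downlink_mbps": 5,
        "latency_ms": 20,        # end-to-end latency target
    },
}

def split_requirements(slice_spec):
    """Separate the joint spec into compute and network requests."""
    compute = slice_spec["microservices"]      # -> container orchestrator
    network = slice_spec["network"]            # -> 5G slice manager
    return compute, network

compute_req, network_req = split_requirements(app_slice)
print(compute_req, network_req)
```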

A Silicon Photonic-Electronic Neural Network for Fiber Nonlinearity Compensation

In optical communication systems, fibre nonlinearity is the major obstacle to increasing transmission capacity. Typically, digital signal processing techniques and hardware are used to deal with optical communication signals, but increasing speed and computational complexity create challenges for such approaches. Highly parallel, ultrafast neural networks using photonic devices have the potential to ease the requirements placed on digital signal processing circuits by processing the optical signals in the analogue domain. Here we report a silicon photonic–electronic neural network for fibre nonlinearity compensation in submarine optical-fibre transmission systems. Our approach uses a photonic neural network based on wavelength-division multiplexing, built on a silicon photonic platform compatible with complementary metal–oxide–semiconductor technology. We show that the platform can be used to compensate for optical fibre nonlinearities and improve the signal quality factor (Q-factor) in a 10,080 km submarine fibre communication system. The Q-factor improvement is comparable to that of a software-based neural network implemented on a workstation with a 32-bit graphics processing unit.
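
For context, the Q-factor mentioned above is conventionally related to the bit-error rate (BER) by the standard formula below; this is textbook background, not an equation taken from the paper.

```latex
% Standard Q-factor / BER relation used in optical communications
% (background material, not quoted from the paper).
\mathrm{BER} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{Q}{\sqrt{2}}\right)
\quad\Longleftrightarrow\quad
Q = \sqrt{2}\,\mathrm{erfc}^{-1}\!\left(2\,\mathrm{BER}\right),
\qquad
Q_{\mathrm{dB}} = 20\log_{10} Q .
```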

DataX: A system for Data eXchange and transformation of streams

The exponential growth in smart sensors and rapid progress in 5G networks are creating a world awash with data streams. However, a key barrier to building performant multi-sensor, distributed stream processing applications is high programming complexity. We propose DataX, a novel platform that improves programmer productivity by enabling easy exchange, transformation, and fusion of data streams. The DataX abstraction simplifies the application’s specification and exposes parallelism and dependencies among the application’s functions (microservices). The DataX runtime automatically sets up appropriate data communication mechanisms, enables effortless reuse of microservices and data streams across applications, and leverages serverless computing to transform, fuse, and auto-scale microservices. DataX makes it easy to write, deploy, and reliably operate distributed applications at scale. Synthesizing these capabilities into a single platform makes DataX substantially more transformative than any available stream processing system.
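
The sketch below illustrates, in hypothetical Python, the flavor of this programming model: microservices as plain functions over streams, composed by transform and fuse operators. All names are assumptions; the real platform would deploy these as auto-scaled serverless functions and select the data communication mechanism itself.

```python
# A minimal, hypothetical sketch of a DataX-style programming model:
# microservices are plain functions over streams, composed by transform
# and fuse operators. Names are illustrative, not the platform's API.

from itertools import count

def camera_stream():
    for i in count():
        yield {"frame_id": i, "pixels": b"..."}

def detect_objects(frame):           # a "transform" microservice
    return {"frame_id": frame["frame_id"], "objects": ["car"]}

def fuse(detection, telemetry):      # a "fuse" microservice over two streams
    return {**detection, **telemetry}

# The real platform would wire these up as distributed, auto-scaled
# serverless functions; here we just compose them locally.
stream = map(detect_objects, camera_stream())
for _, msg in zip(range(3), stream):
    print(fuse(msg, {"speed_kmh": 42}))
```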

Guided Acoustic Brillouin Scattering Measurements In Optical Communication Fibers

Guided acoustic-wave Brillouin scattering (GAWBS) noise is measured using a novel homodyne measurement technique for four fibers commonly used in long-distance optical transmission systems. The measurements are made with single spans and then shown to be consistent with separate multi-span long-distance measurements. The inverse dependence of the GAWBS noise on the fiber effective area is confirmed by comparing different fibers with effective areas ranging between 80 µm² and 150 µm². The line-broadening effect of the coating is observed, and the correlation between the width of the GAWBS peaks and the acoustic mode profile is confirmed. An extensive model of the GAWBS noise in long-distance fibers is presented, including corrections to some commonly repeated mistakes in previous reports. It is established through the model, and verified with the measurements, that the depolarized scattering caused by TR₂ₘ modes contributes twice as much optical noise to the polarization orthogonal to the original source as it does to the parallel polarization. Using this relationship, the polarized and depolarized contributions to the measured GAWBS noise are separated for the first time. As a result, a direct comparison between the theory and the measured GAWBS noise spectrum is shown for the first time, with excellent agreement. It is confirmed that the total GAWBS noise can be calculated from fiber parameters under certain assumptions. It is predicted that the level of depolarized GAWBS noise created by the fiber may depend on the polarization diffusion length, and, consequently, possible ways to reduce GAWBS noise are proposed. Using the developed theory, the dependence of GAWBS noise on the location of the core is calculated, showing that multi-core fibers would have a similar level of GAWBS noise regardless of where their cores are positioned.
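
One way to formalize the separation of polarized and depolarized contributions described above (our reading of the abstract, not equations quoted from the paper): if polarized scattering contributes only to the source polarization while depolarized scattering splits 2:1 between the orthogonal and parallel polarizations, then measuring the noise in both polarizations suffices to separate the two.

```latex
% Sketch of the separation implied by the 2:1 relation in the abstract
% (our formalization, not the paper's equations).
N_{\parallel} = N_{\mathrm{pol}} + N_{\mathrm{depol}}^{\parallel},
\qquad
N_{\perp} = 2\,N_{\mathrm{depol}}^{\parallel}
\;\;\Longrightarrow\;\;
N_{\mathrm{depol}}^{\parallel} = \frac{N_{\perp}}{2},
\quad
N_{\mathrm{pol}} = N_{\parallel} - \frac{N_{\perp}}{2}.
```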

Optical Fiber Sensing Technology Visualizing the Real World via Network Infrastructures – AI technologies for traffic monitoring

Optical fibers have a sensing function that captures environmental changes around the fiber cable. With recent advances in optical transmission and AI technologies, the applications of fiber sensing have expanded and visualization accuracy has improved. We have proposed monitoring road traffic flow using the existing optical fiber infrastructure installed along roads. In this paper, we propose a traffic-flow-analysis AI algorithm that is robust to environmental variations and present evaluation results for traffic monitoring.
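
As a purely illustrative sketch (not the proposed algorithm), fiber-sensing traffic monitoring commonly exploits the fact that a moving vehicle traces a slanted line in the distance-versus-time waterfall of backscatter data, so its speed follows from the slope of that trace.

```python
# Illustrative sketch only (not the paper's algorithm): a moving vehicle
# traces a slanted line in the distance-vs-time "waterfall" of fiber
# backscatter data, so its speed can be estimated from the line's slope.

def estimate_speed_kmh(trace):
    """trace: list of (time_s, position_m) points along one vehicle line."""
    (t0, x0), (t1, x1) = trace[0], trace[-1]
    speed_ms = (x1 - x0) / (t1 - t0)      # slope of the line = speed
    return abs(speed_ms) * 3.6

# A vehicle seen at 0 m at t=0 s and at 500 m at t=30 s -> 60 km/h.
print(estimate_speed_kmh([(0.0, 0.0), (30.0, 500.0)]))
```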

F3S: Free Flow Fever Screening

Identification of people with elevated body temperature can reduce or dramatically slow down the spread of infectious diseases like COVID-19. We present a novel fever-screening system, F3S, that uses edge machine learning techniques to accurately measure core body temperatures of multiple individuals in a free-flow setting. F3S performs real-time sensor fusion of visual and thermal camera data streams to detect elevated body temperature, and it has several unique features: (a) visual and thermal streams represent very different modalities, and we dynamically associate semantically equivalent regions across visual and thermal frames by using a new, dynamic alignment technique that analyzes content and context in real time; (b) we track people through occlusions, identify the eye (inner canthus), forehead, face, and head regions where possible, and provide an accurate temperature reading by using a prioritized refinement algorithm; and (c) we robustly detect elevated body temperature even in the presence of personal protective equipment like masks, sunglasses, or hats, all of which can be affected by hot weather and lead to spurious temperature readings. F3S has been deployed at over a dozen large commercial establishments, providing contact-less, free-flow, real-time fever screening for thousands of employees and customers in indoor and outdoor settings.
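
The snippet below is a hedged sketch of the prioritized-refinement idea as we read it: prefer the most reliable facial region that is visible when producing a temperature reading. Region names, priorities, and values are illustrative assumptions, not F3S’s actual parameters.

```python
# Hedged sketch of prioritized refinement: prefer the most reliable facial
# region that is visible (inner canthus, then forehead, then face, then
# head). Names and values are illustrative, not F3S's actual parameters.

REGION_PRIORITY = ["inner_canthus", "forehead", "face", "head"]

def read_temperature(thermal_regions):
    """thermal_regions: dict of region name -> measured temperature (deg C)."""
    for region in REGION_PRIORITY:
        if region in thermal_regions:
            return region, thermal_regions[region]
    return None, None

# A masked person whose eyes are visible: the eye region wins.
print(read_temperature({"inner_canthus": 36.9, "head": 35.2}))
```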

Multi-Scale One-Class Recurrent Neural Networks for Discrete Event Sequence Anomaly Detection

Discrete event sequences are ubiquitous; one example is the ordered series of process interaction events in Information and Communication Technology systems. Recent years have witnessed increasing efforts in detecting anomalies in discrete event sequences. However, it remains an extremely difficult task due to several intrinsic challenges, including data imbalance, the discrete nature of the events, and the sequential nature of the data. To address these challenges, in this paper we propose OC4Seq, a multi-scale one-class recurrent neural network for detecting anomalies in discrete event sequences. Specifically, OC4Seq integrates the anomaly detection objective with recurrent neural networks (RNNs) to embed the discrete event sequences into latent spaces, where anomalies can be easily detected. In addition, given that an anomalous sequence could be caused by individual events, subsequences of events, or the whole sequence, we design a multi-scale RNN framework to capture different levels of sequential patterns simultaneously. We fully implement and evaluate OC4Seq on three real-world system log datasets. The results show that OC4Seq consistently outperforms various representative baselines by a large margin. Moreover, both quantitative and qualitative analyses verify the importance of capturing multi-scale sequential patterns for event anomaly detection. To encourage reproducibility, we make the code and data publicly available.
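
As a rough illustration of the single-scale building block, the following PyTorch sketch combines an RNN sequence encoder with a Deep SVDD-style one-class objective that pulls normal sequences toward a center in latent space; OC4Seq applies this idea at multiple scales, and all architecture details and hyperparameters here are our assumptions, not the paper’s exact model.

```python
# Minimal sketch of a one-class RNN objective in the spirit of OC4Seq:
# embed event sequences with a GRU and pull normal sequences toward a
# fixed center (Deep SVDD-style). Details are illustrative assumptions.

import torch
import torch.nn as nn

class OneClassRNN(nn.Module):
    def __init__(self, num_events, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_events, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.center = nn.Parameter(torch.randn(hidden_dim), requires_grad=False)

    def forward(self, seqs):                  # seqs: (batch, seq_len) event ids
        _, h = self.rnn(self.embed(seqs))     # h: (1, batch, hidden_dim)
        return h.squeeze(0)

def one_class_loss(model, seqs):
    z = model(seqs)
    # Pull normal sequences toward the center; at test time, the distance
    # of z from the center serves as the anomaly score.
    return ((z - model.center) ** 2).sum(dim=1).mean()

model = OneClassRNN(num_events=100)
normal_batch = torch.randint(0, 100, (8, 20))
loss = one_class_loss(model, normal_batch)
loss.backward()
```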

Domain-oriented Language Modeling with Adaptive Hybrid Masking and Optimal Transport Alignment

Motivated by the success of pre-trained language models such as BERT in a broad range of natural language processing (NLP) tasks, recent research efforts have sought to adapt these models to different application domains. Along this line, existing domain-oriented models have primarily followed the vanilla BERT architecture and made straightforward use of the domain corpus. However, domain-oriented tasks usually require an accurate understanding of domain phrases, and such fine-grained phrase-level knowledge is hard to capture with existing pre-training schemes. Also, the word-co-occurrence-guided semantic learning of pre-trained models can be largely augmented by entity-level association knowledge, but this carries a risk of introducing noise due to the lack of ground-truth word-level alignment. To address these issues, we provide a generalized domain-oriented approach that leverages auxiliary domain knowledge to improve the existing pre-training framework in two ways. First, to preserve phrase knowledge effectively, we build a domain phrase pool as auxiliary knowledge and introduce an Adaptive Hybrid Masked Model to incorporate it; the model integrates two learning modes, word learning and phrase learning, and adaptively switches between them. Second, we introduce Cross Entity Alignment, which uses entity associations as weak supervision to augment the semantic learning of pre-trained models. To alleviate the potential noise in this process, we introduce an interpretable Optimal Transport-based approach to guide alignment learning. Experiments on four domain-oriented tasks demonstrate the superiority of our framework.
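
To illustrate the hybrid masking idea (not the paper’s exact procedure), the sketch below masks a whole phrase from a phrase pool when one occurs and falls back to word-level masking otherwise; the paper’s switching between modes is adaptive, whereas this toy version uses a fixed coin flip purely for illustration.

```python
# Toy sketch of hybrid masking: mask a whole domain phrase from a phrase
# pool when one occurs, otherwise mask a single word. The paper's switching
# is adaptive; the fixed coin flip here is purely illustrative.

import random

PHRASE_POOL = {("deep", "learning"), ("language", "model")}

def hybrid_mask(tokens, p_phrase=0.5):
    tokens = list(tokens)
    for i in range(len(tokens) - 1):
        if (tokens[i], tokens[i + 1]) in PHRASE_POOL and random.random() < p_phrase:
            tokens[i] = tokens[i + 1] = "[MASK]"       # phrase-level masking
            return tokens
    j = random.randrange(len(tokens))                  # word-level masking
    tokens[j] = "[MASK]"
    return tokens

print(hybrid_mask("a deep learning approach".split()))
```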

SIGL: Securing Software Installations Through Deep Graph Learning

Many users implicitly assume that software can only be exploited after it is installed. However, recent supply-chain attacks demonstrate that application integrity must be ensured during installation itself. We introduce SIGL, a new tool for detecting malicious behavior during software installation. SIGL collects traces of system call activity, building a data provenance graph that it analyzes using a novel autoencoder architecture with a graph long short-term memory network (graph LSTM) for the encoder and a standard multilayer perceptron for the decoder. SIGL flags suspicious installations as well as the specific installation-time processes that are likely to be malicious. Using a test corpus of 625 malicious installers containing real-world malware, we demonstrate that SIGL has a detection accuracy of 96%, outperforming similar systems from industry and academia by up to 87% in precision and recall and 45% in accuracy. We also demonstrate that SIGL can pinpoint the processes most likely to have triggered malicious behavior, works on different audit platforms and operating systems, and is robust to training data contamination and adversarial attack. It can be used with application-specific models, even in the presence of new software versions, as well as application-agnostic meta-models that encompass a wide range of applications and installers.
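
A highly simplified sketch of the autoencoder idea follows: encode a provenance graph’s node features, decode them, and flag processes with high reconstruction error. We substitute a plain per-node MLP encoder for the paper’s graph LSTM, so this is an architectural stand-in rather than SIGL itself; all dimensions are illustrative assumptions.

```python
# Highly simplified sketch of SIGL's autoencoder idea: encode provenance
# graph node features, decode them, and score anomalies by reconstruction
# error. A per-node MLP stands in for the paper's graph LSTM encoder.

import torch
import torch.nn as nn

class GraphAutoencoder(nn.Module):
    def __init__(self, feat_dim=16, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, feat_dim))

    def forward(self, node_feats):            # node_feats: (num_nodes, feat_dim)
        z = self.encoder(node_feats)
        return self.decoder(z)

def anomaly_scores(model, node_feats):
    recon = model(node_feats)
    return ((recon - node_feats) ** 2).mean(dim=1)   # per-process error

graph = torch.randn(10, 16)                   # 10 processes, 16-dim features
model = GraphAutoencoder()
print(anomaly_scores(model, graph))           # high scores flag suspicious processes
```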

Overcoming Poor Word Embeddings with Word Definitions

Modern natural language understanding models depend on pretrained subword embeddings, but applications may need to reason about words that were never or rarely seen during pretraining. We show that examples that depend critically on a rarer word are more challenging for natural language inference models. We then explore how a model could learn to use definitions, provided in natural text, to overcome this handicap. Our model’s understanding of a definition is usually weaker than that provided by a well-modeled word embedding, but it recovers most of the performance lost when using a completely untrained word.
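
As a toy illustration of the definition-to-embedding idea (not the paper’s model), one can synthesize a vector for a rare word by averaging the embeddings of the words in its dictionary definition and substituting it for the untrained entry.

```python
# Toy illustration (not the paper's model): build an embedding for a rare
# word by averaging the embeddings of the words in its definition, then
# substitute it for the word's untrained vector.

import torch

vocab = {"a": 0, "small": 1, "fast": 2, "boat": 3, "skiff": 4}
embeddings = torch.nn.Embedding(len(vocab), 8)

def embed_from_definition(definition):
    ids = torch.tensor([vocab[w] for w in definition if w in vocab])
    return embeddings(ids).mean(dim=0)        # definition -> word vector

# "skiff" is rare: overwrite its (poorly trained) vector with one built
# from its definition "a small fast boat".
with torch.no_grad():
    embeddings.weight[vocab["skiff"]] = embed_from_definition("a small fast boat".split())
```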