Integrated Systems

Projects

Multimodal Stream Fusion

From canaries sensing danger in coal mines to drones deploying in areas too risky for manned flight, humans continue to engineer novel sensors to overcome the limitations of human senses. Modern smart sensors translate the physical world into digital streams, each a digital representation of the physical quantity being measured. In the future, exponential growth in smart sensors will produce billions of digital data streams, each describing an ever-smaller aspect of the physical or digital world in ever-greater detail. A rich understanding of these complex worlds will be impossible to build from any single sensor and will inevitably require fusing information from many data streams. Our current focus is on stream fusion that leverages machine learning techniques to bridge radically different data semantics, vastly different data characteristics and the absence of a common frame of reference across digital streams. Stream fusion will exploit the complementary strengths of different sensing modalities while canceling out their weaknesses, yielding improved sensing capabilities and extremely rich, context-aware data that overcomes the limitations in information, range and accuracy of any individual sensor.
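To make the fusion step concrete, here is a minimal sketch of late fusion across two streams. The sensors, sampling rates and windowing below are hypothetical stand-ins, not a description of our system: the sketch simply aligns two streams with different rates onto a shared windowed timeline (the simplest way to impose a common frame of reference) and emits one joint feature vector per window.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Sample:
        t: float       # timestamp in seconds (the shared frame of reference)
        value: float   # scalar reading; stand-in for richer per-sensor features

    def window_mean(stream: List[Sample], t0: float, t1: float) -> float:
        """Summarize the readings that fall inside the window [t0, t1)."""
        vals = [s.value for s in stream if t0 <= s.t < t1]
        return sum(vals) / len(vals) if vals else 0.0

    def fuse(a: List[Sample], b: List[Sample],
             horizon: float, window: float) -> List[Tuple[float, float, float]]:
        """Late fusion: one joint feature vector per shared time window."""
        fused, t = [], 0.0
        while t < horizon:
            fused.append((t, window_mean(a, t, t + window),
                             window_mean(b, t, t + window)))
            t += window
        return fused

    # Two hypothetical sensors with different sampling rates: 10 Hz vs. 3 Hz.
    vib = [Sample(i / 10.0, (i % 7) * 0.1) for i in range(50)]
    mic = [Sample(i / 3.0, (i % 4) * 0.5) for i in range(15)]
    for row in fuse(vib, mic, horizon=5.0, window=1.0):
        print(row)

In practice the per-window summaries would be learned features and the combination a learned model, but the alignment problem is the same.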

Self-Optimizing 5G Applications in Network Slices

A decade ago, large-scale services like enterprise resource planning (ERP) were a corner case, often designed as one-off systems. Today, applications like social networks, automated trading and video streaming have made large-scale services the norm rather than the exception. In the future, advances in 5G networks and an explosion in the number of smart devices, microservices, databases, computing tiers and endpoints will make services so complex that they cannot be tuned or managed by humans. The sheer scale, dynamism and concurrency of these services will require them to be autonomic: they will need to continuously self-assess, learn and automatically adjust for resource needs, data quality and service reliability. Our focus is on systematic models for designing, implementing and managing large-scale services that evolve to support autonomic behavior. Large services will be partitioned more finely into larger numbers of loosely coupled microservices. AI techniques will monitor, analyze and automatically optimize this large ensemble of microservices based on service-specific knowledge, accumulated experience and the dynamic environment, eliminating barriers such as high service-response latency, poor service quality, and inadequate reliability and scalability.
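As a rough illustration of such an autonomic loop, the sketch below runs a monitor-analyze-execute cycle that scales a microservice's replica count against a latency objective. The telemetry function, thresholds and scaling step are hypothetical, not our actual models:

    import random

    TARGET_P95_MS = 200.0   # assumed service-level latency objective
    STEP = 1                # replicas added or removed per control cycle

    def observe_p95_latency(replicas: int) -> float:
        """Stand-in for real telemetry: latency falls as replicas grow."""
        return 600.0 / replicas + random.uniform(-20.0, 20.0)

    def autonomic_step(replicas: int) -> int:
        """One self-assessment cycle: monitor, analyze, then adjust resources."""
        p95 = observe_p95_latency(replicas)               # monitor
        if p95 > TARGET_P95_MS:                           # analyze
            replicas += STEP                              # execute: scale out
        elif p95 < 0.5 * TARGET_P95_MS and replicas > 1:
            replicas -= STEP                              # execute: scale in
        print(f"p95 = {p95:6.1f} ms -> {replicas} replica(s)")
        return replicas

    replicas = 1
    for _ in range(10):   # a real service would run this loop continuously
        replicas = autonomic_step(replicas)

A production loop would learn its thresholds and anticipate load rather than react to it, but the self-assess-and-adjust structure is the core of autonomic behavior.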

Real-Time, Distributed Stream Processing

New application needs have always sparked human innovation. A decade ago, cloud computing enabled high-value enterprise services with global reach and scale, but with delays of seconds to minutes. Today, we stream on-demand and time-shifted HD or 4K video from the cloud with delays of hundreds of milliseconds. In the future, the need for greater efficiency and lower latency between measurement and action will drive the development of real-time methods for feature extraction, computation and machine learning on streaming data. Our focus is on enabling applications to make efficient use of limited computing resources in proximity to users and sensors, rather than resources in the cloud, for AI processing such as feature extraction, inferencing and periodic re-training of tiny, dynamic, contextualized AI models. Such edge-cloud processing avoids round trips of 100 milliseconds or more to the cloud and preserves the privacy of personal stream data used for training. But it won't be easy to develop. Barriers include the high programming complexity of efficiently using tiers of limited computing resources (in smart devices, the edge and the cloud), high processing delays due to limited edge resources, and the need for just-in-time adaptation to dynamic environments (changes in the content of data streams, the number of users or ambient conditions).
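The sketch below illustrates the edge-first pattern this implies: a tiny local model handles each stream item, and only low-confidence cases pay the round trip to a heavyweight cloud model. The models, confidence proxy and threshold are invented for illustration and are not the project's actual components:

    CONFIDENCE_THRESHOLD = 0.8   # assumed cutoff for trusting the edge model

    def edge_infer(features):
        """Tiny, contextualized edge model: fast, and the data stays local."""
        score = sum(features) / len(features)   # stand-in for a real model
        label = "anomaly" if score > 0.5 else "normal"
        confidence = abs(score - 0.5) * 2.0     # crude confidence proxy
        return label, confidence

    def cloud_infer(features):
        """Stand-in for a heavy cloud model, paying a 100+ ms round trip."""
        return "anomaly" if sum(features) > 0.5 * len(features) else "normal"

    def classify(features):
        label, conf = edge_infer(features)
        if conf >= CONFIDENCE_THRESHOLD:
            return label, "edge"                 # low latency, private
        return cloud_infer(features), "cloud"    # rare, expensive fallback

    print(classify([0.95, 0.92, 0.98]))   # confident: resolved at the edge
    print(classify([0.45, 0.55, 0.50]))   # ambiguous: offloaded to the cloud

The design choice is that the common case never leaves the device; the cloud is reserved for the hard tail and for periodically re-training the edge model.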

Heterogeneous Cluster Computing

Traditional enterprise applications run on platforms that are complicated to use and expensive to build and maintain. New IT solutions built on dynamically scalable, virtualized clusters of shared computing resources can dramatically cut the cost of delivering enterprise IT services, but many obstacles must first be overcome. Our main objective is to develop new technologies that help understand, analyze, create and optimize a wide variety of enterprise applications on cloud-based, shared, heterogeneous computing architectures. Our current focus is on four themes: parallel programming models and run-times, run-times for adapting legacy applications, virtualization, and custom accelerators. To understand the impact and efficacy of these technologies, we will also need to create open, cluster-level enterprise application benchmarks that can drive the design of future heterogeneous computing cluster architectures and parallel programming models. The importance of suitable metrics and figures of merit cannot be overstated: the wrong metrics give an incomplete or irrelevant view of the significance of new technologies. For heterogeneous computing clusters, traditional metrics like performance and energy efficiency must be tied directly to the cost of delivering IT services. Such new figures of merit will provide the optimization context for new heterogeneous cluster technologies.
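As an example of what a cost-centric figure of merit looks like, the sketch below folds throughput (performance) and power draw (energy efficiency) into dollars per million requests. All the numbers are illustrative assumptions, not measured results:

    def cost_per_million_requests(throughput_rps: float,
                                  server_price_usd: float,
                                  lifetime_years: float,
                                  power_watts: float,
                                  usd_per_kwh: float) -> float:
        """Fold performance and energy efficiency into one cost metric."""
        hours = lifetime_years * 365 * 24
        total_requests = throughput_rps * hours * 3600
        energy_cost = (power_watts / 1000.0) * hours * usd_per_kwh
        return (server_price_usd + energy_cost) / total_requests * 1e6

    # Hypothetical comparison: a plain CPU node vs. an accelerated node that
    # is 5x faster but costs more to buy and draws more power.
    cpu = cost_per_million_requests(2_000, 5_000, 3, 300, 0.12)
    acc = cost_per_million_requests(10_000, 12_000, 3, 800, 0.12)
    print(f"CPU node:         ${cpu:.4f} per million requests")
    print(f"Accelerated node: ${acc:.4f} per million requests")

By this metric the accelerated node wins despite its higher price and power draw, exactly the kind of trade-off that raw performance or energy numbers alone would obscure.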

COSMIC - System Software for Efficient Xeon Phi Coprocessor Sharing

COSMIC* is NEC’s system software that enables seamless Xeon Phi coprocessor sharing. It is completely transparent to applications and to all other system software components. COSMIC is useful in organizations where several users share one or more Xeon Phi-based servers, and it can reduce capital costs by utilizing fewer servers more efficiently. The project was featured in InsideHPC's Weekly Slidecast.

Publications:

G. Coviello, S. Cadambi and S.T. Chakradhar (NEC Laboratories America, Inc.), "A Coprocessor Sharing-Aware Scheduler for Xeon Phi-based Compute Clusters," to appear at IPDPS 2014.

S. Cadambi, G. Coviello, C. Li, R. Phull, K. Rao, M. Sankaradas and S.T. Chakradhar (NEC Laboratories America, Inc.), "COSMIC: Middleware for High Performance and Reliable Multiprocessing on Xeon Phi Coprocessors," The 22nd International ACM Symposium on High Performance Parallel and Distributed Computing (HPDC 2013), pp. 215-226, 2013.

(*) COSMIC is pre-commercialization and the name is subject to change.
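To give a flavor of sharing-aware scheduling (see the publications above for the real algorithms), here is a toy placement policy. The card parameters and the greedy rule are invented for illustration and are not COSMIC's actual scheduler:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Coprocessor:
        name: str
        mem_free_gb: float
        jobs: List[str] = field(default_factory=list)

    def place(job: str, mem_gb: float,
              cards: List[Coprocessor]) -> Optional[Coprocessor]:
        """Greedy sharing-aware placement: among cards with enough free
        memory, pick the least-loaded one, so that several users can
        share each Xeon Phi without oversubscribing its memory."""
        fits = [c for c in cards if c.mem_free_gb >= mem_gb]
        if not fits:
            return None   # queue the job rather than oversubscribe memory
        best = min(fits, key=lambda c: len(c.jobs))
        best.jobs.append(job)
        best.mem_free_gb -= mem_gb
        return best

    cards = [Coprocessor("mic0", 8.0), Coprocessor("mic1", 8.0)]
    for job, mem in [("userA:sim", 3.0), ("userB:train", 5.0),
                     ("userC:solve", 4.0)]:
        card = place(job, mem, cards)
        print(job, "->", card.name if card else "queued")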
