
Integrated Systems

Our goal is to innovate, design, evaluate and prototype high-performance, energy-efficient intelligent distributed systems for complex and large-scale 5G applications and services.

Department Description

New application needs have always sparked human innovation. A decade ago, cloud computing enabled high-value enterprise services with a global reach and scale (but with several minutes or seconds of delay), and large-scale services like enterprise resource planning (ERP) were a corner-case scenario, often designed as one-off systems. Today, applications like social networks, automated trading and video streaming have made large-scale services the norm rather than the exception. In the future, advances in 5G networks and an explosion in the number of smart devices, microservices, databases, computing tiers and end-points in a service will make services so complex that they cannot be tuned or managed by humans. The sheer scale, dynamic nature and concurrency in these services will require them to be intelligent and autonomic. They will need to continuously self-assess, learn and automatically adjust for resource needs, data quality and service reliability.

Our current focus is on innovation, design, and prototyping of high-performance and intelligent distributed systems for complex, large-scale 5G applications and services. The need for increased efficiency and reduced latency between measurement and action is also driving our design of real-time methods for feature extraction, computation and machine learning on multimodal streaming data.

Featured Research Projects

Self-Optimizing 5G Applications in Network Slices

A decade ago, large-scale services like enterprise resource planning (ERP) were a corner-case scenario, often designed as one-off systems. Today, applications like social networks, automated trading and video streaming have made large-scale services the norm rather than the exception. In the future, advances in 5G networks and an explosion in the number of smart devices, microservices, databases, computing tiers and end-points in a service will make services so complex that they cannot be tuned or managed by humans. The sheer scale, dynamic nature and concurrency in these services will require them to be autonomic. They will need to continuously self-assess, learn and automatically adjust for resource needs, data quality and service reliability. Our focus is on systematic models for designing, implementing and managing large-scale services that will evolve to support autonomic behavior. Large services will be partitioned to a greater degree, with a greater number of loosely coupled microservices. AI techniques will monitor, analyze and automatically optimize the large ensemble of microservices based on service-specific knowledge, experience and the dynamic environment, to eliminate barriers like high service-response latencies and poor service quality, reliability and scalability.
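The monitor-analyze-adjust behavior described above can be sketched as a minimal control loop. Everything here is illustrative rather than a description of any deployed system: the Microservice class, the toy latency model and the SLO_MS threshold are assumptions made for the sake of the example.

```python
SLO_MS = 100.0  # assumed service-level objective: target response latency (ms)

class Microservice:
    """Hypothetical stand-in for one loosely coupled microservice."""

    def __init__(self, name, replicas=1):
        self.name = name
        self.replicas = replicas

    def observed_latency_ms(self, load_rps):
        # Toy latency model: latency grows with per-replica load.
        return 20.0 + 2.0 * (load_rps / self.replicas)

def autoscale(service, load_rps):
    """One iteration of a monitor-analyze-adjust cycle.

    Monitor: observe latency. Analyze: compare against the SLO.
    Adjust: add a replica when the SLO is violated, remove one
    when the service is comfortably over-provisioned.
    """
    latency = service.observed_latency_ms(load_rps)
    if latency > SLO_MS:
        service.replicas += 1
    elif latency < SLO_MS / 2 and service.replicas > 1:
        service.replicas -= 1
    return latency

svc = Microservice("checkout", replicas=1)
for _ in range(5):
    autoscale(svc, load_rps=200)
# The loop converges on a replica count whose latency meets the SLO.
```

A real autonomic system would replace the toy latency model with live measurements and learned performance models, but the feedback structure, observe, compare against a service objective, act, is the same.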

Real-Time, Distributed Stream Processing

New application needs have always sparked human innovation. A decade ago, cloud computing enabled high-value enterprise services with a global reach and scale, but with several minutes or seconds of delay. Today, we stream on-demand and time-shifted HD or 4K video from the cloud with delays of hundreds of milliseconds. In the future, the need for increased efficiency and reduced latency between measurement and action will drive the development of real-time methods for feature extraction, computation and machine learning on streaming data. Our focus is on enabling applications to make efficient use of limited computing resources in proximity to users and sensors (rather than resources in the cloud) for AI processing like feature extraction, inferencing and periodic re-training of tiny, dynamic, contextualized AI models. Such edge-cloud processing will avoid round-trip delays of 100 milliseconds or more to the cloud and preserve the privacy of personal stream data used for training. But it won't be easy to develop. Barriers include the high programming complexity of efficiently using tiers of limited computing resources (in smart devices, the edge and the cloud), high processing delays due to limited edge resources, and just-in-time adaptations to dynamic environments (changes in the content of data streams, number of users or ambient conditions).
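One building block of real-time feature extraction on a resource-limited edge node is single-pass computation: maintaining features incrementally as samples arrive, without buffering the stream. A minimal sketch, using Welford's online algorithm for running mean and variance (chosen here as a generic example, not as the department's specific method):

```python
class OnlineStats:
    """Single-pass mean/variance over a stream (Welford's algorithm).

    Uses O(1) memory per feature, so an edge node can extract these
    features without storing or re-scanning the raw stream.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Population variance of all samples seen so far.
        return self.m2 / self.n if self.n > 0 else 0.0

stats = OnlineStats()
for sample in [2.0, 4.0, 6.0, 8.0]:  # stand-in for a sensor stream
    stats.update(sample)
# stats.mean is 5.0 and stats.variance is 5.0 for this stream.
```

Features like these can feed a tiny on-device model for inferencing, with only aggregates (never raw personal data) ever leaving the edge.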

Multimodal Stream Fusion

From canaries sensing danger in coal mines to drones deploying in areas too risky for manned flight, humans continue to engineer novel sensors to overcome the limitations of human senses. Modern-day smart sensors translate the physical world into digital streams by producing a digital representation of the physical quantity being measured. In the future, an exponential growth in smart sensors will result in billions of digital data streams, each describing an increasingly smaller aspect of the physical or digital worlds in greater detail. A rich understanding of these complex worlds, which will be impossible to create using information from any single sensor, will inevitably require the fusion of information in a variety of data streams. Our current focus is on stream fusion that leverages machine learning techniques to bridge radically different data semantics, vastly different data characteristics and the lack of a common frame of reference across different digital streams. Stream fusion will exploit the complementary strengths of different sensing modalities while canceling out their weaknesses, leading to improved sensing capabilities and extremely rich, context-aware data that eliminates the limitations in information, range and accuracy of any individual sensor.
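The idea of exploiting complementary strengths while canceling out weaknesses can be illustrated with the simplest possible fusion rule: inverse-variance weighting of two noisy readings of the same quantity. This is a textbook sketch, not the fusion techniques under development; the readings and variances below are made up for the example.

```python
def fuse(reading_a, var_a, reading_b, var_b):
    """Minimum-variance linear fusion of two independent estimates.

    Each reading is weighted by the inverse of its noise variance,
    so the more reliable modality dominates the fused result.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * reading_a + w_b * reading_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always below either input variance
    return fused, fused_var

# A precise sensor (variance 1.0) pulls the fused estimate toward
# itself, away from the noisier one (variance 4.0).
value, var = fuse(10.0, 1.0, 14.0, 4.0)
# value is 10.8 and var is 0.8 -- better than either sensor alone.
```

The key property, that the fused variance is lower than either input's, is the quantitative form of "improved sensing capabilities"; real multimodal fusion must additionally align semantics, sampling rates and frames of reference before any such combination is possible.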
