DataX Allocator: Dynamic resource management for stream analytics at the Edge

Publication Date: 11/29/2022

Event: The 9th International Conference on Internet of Things: Systems, Management and Security (IOTSMS 2022)

Reference: pp. 1-13, 2022

Authors: Priscilla Benedetti, NEC Laboratories America, Inc., Vrije Universiteit Brussel; Giuseppe Coviello, NEC Laboratories America, Inc.; Kunal Rao, NEC Laboratories America, Inc.; Srimat T. Chakradhar, NEC Laboratories America, Inc.

Abstract: Serverless edge computing aims to deploy and manage applications so that developers are unaware of the challenges associated with the dynamic management, sharing, and maintenance of the edge infrastructure. However, this is a non-trivial task because the resource usage of different edge applications varies with the content of their input sensor data streams. We present a novel reinforcement-learning (RL) technique that maximizes the processing rates of applications by dynamically allocating resources (such as CPU cores or memory) to the microservices in these applications. We model applications as analytics pipelines consisting of several microservices, and a pipeline’s processing rate directly impacts the accuracy of the insights from the application. In our problem formulation, neither the state space nor the number of RL actions depends on the type of workload in the microservices, the number of microservices in a pipeline, or the number of pipelines. This allows us to train the RL model only once and reuse it many times to improve the accuracy of insights for a diverse set of AI/ML engines, such as action recognition or face recognition, and for applications with varying microservices. Our experiments with real-world applications, namely face recognition and action recognition, show that our approach outperforms other widely used alternatives and achieves up to a 2.5X improvement in the overall application processing rate. Furthermore, when we apply our RL model trained on a face recognition pipeline to a different and more complex action recognition pipeline, we obtain a 2X improvement in processing rate, demonstrating the versatility and robustness of our RL model to pipeline changes.

Publication Link: https://ieeexplore.ieee.org/document/10061998
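Illustrative example: the abstract highlights an RL formulation whose state and action spaces stay fixed regardless of workload type, pipeline length, or number of pipelines. The Python sketch below is not the paper's DataX allocator; it is a minimal, hypothetical illustration of that idea, in which the state is a fixed-length summary of per-microservice processing rates and the actions always operate on the current bottleneck or the least-loaded stage. All names (Microservice, PipelineEnv, the reward shaping) are assumptions for illustration only.

# Hypothetical sketch, not the authors' implementation: a fixed-size state/action
# space for allocating CPU cores across a pipeline of microservices, independent
# of how many microservices the pipeline contains.
import random
from dataclasses import dataclass

@dataclass
class Microservice:
    name: str
    cores: int      # currently allocated CPU cores
    demand: float   # cores needed to keep up with the input stream (content-dependent)

    def rate(self) -> float:
        # Fraction of the incoming stream this stage can process with its allocation.
        return min(1.0, self.cores / self.demand)

class PipelineEnv:
    """Toy pipeline whose processing rate is limited by its slowest microservice."""

    ACTIONS = ("GROW_BOTTLENECK", "SHRINK_SLACKEST", "NOOP")  # fixed-size action set

    def __init__(self, services, core_budget):
        self.services = services
        self.core_budget = core_budget

    def pipeline_rate(self) -> float:
        return min(s.rate() for s in self.services)

    def state(self):
        # Fixed-length summary regardless of pipeline length:
        # (bottleneck stage rate, least-loaded stage rate, spare cores in the budget).
        rates = sorted(s.rate() for s in self.services)
        used = sum(s.cores for s in self.services)
        return (round(rates[0], 1), round(rates[-1], 1), self.core_budget - used)

    def step(self, action: str) -> float:
        bottleneck = min(self.services, key=lambda s: s.rate())
        slackest = max(self.services, key=lambda s: s.rate())
        used = sum(s.cores for s in self.services)
        if action == "GROW_BOTTLENECK" and used < self.core_budget:
            bottleneck.cores += 1
        elif action == "SHRINK_SLACKEST" and slackest.cores > 1:
            slackest.cores -= 1
        return self.pipeline_rate()  # reward: overall pipeline processing rate

if __name__ == "__main__":
    env = PipelineEnv(
        [Microservice("decode", 1, 2.0),
         Microservice("detect", 1, 4.0),
         Microservice("recognize", 1, 3.0)],
        core_budget=10,
    )
    # A learned policy would choose actions from env.state(); random actions shown here.
    for _ in range(20):
        env.step(random.choice(PipelineEnv.ACTIONS))
    print("final allocation:", [(s.name, s.cores) for s in env.services],
          "rate:", round(env.pipeline_rate(), 2))

Because the state and action definitions above never enumerate individual microservices, a policy trained on one pipeline can, in principle, be applied to another pipeline with a different number or type of stages, which is the reuse property the abstract reports for the face recognition and action recognition pipelines.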