Graph Neural Networks, Explained: Our Role in the Future of AI

NEC Laboratories America (NECLA) is advancing the frontier of Graph Neural Networks (GNNs), a transformative AI technology that processes complex, interconnected data. Through innovations like PTDNet for robust learning, novel frameworks for explainability, StrGNN for anomaly detection in dynamic graphs, and GERDQ for calibration with out-of-distribution nodes, NECLA is addressing critical challenges in GNN development. These breakthroughs have real-world implications in fields such as cybersecurity, bioinformatics, and recommendation systems, positioning NECLA as a leader in the evolution of graph-based AI.
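At their core, GNNs work by letting each node repeatedly aggregate information from its neighbors. The sketch below shows one round of this message passing with simple mean aggregation in plain Python; it is a generic illustration of the mechanism, not an implementation of PTDNet, StrGNN, or GERDQ.

```python
def gnn_layer(features, adjacency):
    """One GNN propagation step: each node averages its neighbors'
    feature vectors (plus its own) to produce an updated embedding.
    Real GNN layers add learned weights and nonlinearities on top."""
    updated = {}
    for node, feats in features.items():
        neighborhood = [features[n] for n in adjacency.get(node, [])] + [feats]
        dim = len(feats)
        updated[node] = [
            sum(vec[i] for vec in neighborhood) / len(neighborhood)
            for i in range(dim)
        ]
    return updated

# Toy path graph A - B - C with 2-dimensional node features.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
x = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
h = gnn_layer(x, adj)  # node A blends its own features with B's
```

Stacking several such layers lets information flow across multi-hop neighborhoods, which is what makes GNNs effective on interconnected data like molecular graphs or network logs.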

Shaping the Future with Responsible AI, Collaboration, and Disruption

Chris White, President of NEC Laboratories America, reflects on the lab’s mission to build responsible, human-centered technology—from AI to streetscape innovation—that tackles real-world challenges. In recent keynotes and interviews, he’s emphasized the power of collaboration, the importance of designing AI as a tool that empowers (not replaces), and the discipline required to scale truly disruptive ideas. He’s also shared thoughts on using digital tools for sustainability, such as optimizing global water systems, and the need for cooperative decision-making in complex environments like supply chains. Through it all, he reminds us: real innovation isn’t about flashy tech—it’s about solving meaningful problems, at scale, with intention and integrity.

Top 10 Most Legendary College Pranks of All Time for April Fools’ Day

At NEC Labs America, we celebrate innovation in all forms—even the brilliantly engineered college prank. From MIT’s police car on the Great Dome to Caltech hacking the Rose Bowl, these legendary stunts showcase next-level planning, stealth, and technical genius. Our Top 10 list honors the creativity behind pranks that made history (and headlines). This April Fools’ Day, we salute the hackers, makers, and mischief-makers who prove that brilliance can be hilarious.

A Smart Sensing Grid for Road Traffic Detection Using Terrestrial Optical Networks and Attention-Enhanced Bi-LSTM

We demonstrate the use of existing terrestrial optical networks as a smart sensing grid, employing a bidirectional long short-term memory (Bi-LSTM) model enhanced with an attention mechanism to detect road vehicles. The main idea of our approach is to deploy a fast, accurate, and reliable trained deep learning model in each network element that constantly monitors the state of polarization (SOP) of data signals traveling through the optical line system (OLS). This deployment approach enables the creation of a smart sensing grid that can continuously monitor wide areas and respond with notifications/alerts about road traffic situations. The model is trained on a synthetic dataset and tested on a real dataset obtained from a deployed metropolitan fiber cable in the city of Turin. Our model achieves 99% accuracy on both the synthetic and real datasets.
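The attention mechanism in such a model lets the classifier focus on the timesteps of the SOP sequence that matter most. The paper's exact architecture is not given here; the sketch below shows only the generic attention-pooling step, with the per-timestep scores passed in directly (in a trained model they would come from a learned scoring layer).

```python
import math

def attention_pool(hidden_states, scores):
    """Softmax-weight per-timestep hidden states (e.g., Bi-LSTM outputs)
    and sum them into one context vector for classification.
    `scores` stand in for a learned attention scoring layer."""
    m = max(scores)                           # shift for numerical stability
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    weights = [e / z for e in exp]            # attention weights, sum to 1
    dim = len(hidden_states[0])
    context = [
        sum(w * h[i] for w, h in zip(weights, hidden_states))
        for i in range(dim)
    ]
    return weights, context
```

With uniform scores this reduces to mean pooling; a trained scorer instead concentrates weight on the SOP fluctuations that signal a passing vehicle.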

CAMTUNER: Adaptive Video Analytics Pipelines via Real-time Automated Camera Parameter Tuning

In Video Analytics Pipelines (VAP), Analytics Units (AUs) such as object detection and face recognition operating on remote servers rely heavily on surveillance cameras to capture high-quality video streams to achieve high accuracy. Modern network cameras offer an array of parameters that directly influence video quality. While a few such parameters, e.g., exposure, focus and white balance, are automatically adjusted by the camera internally, the others are not. We denote such camera parameters as non-automated (NAUTO) parameters. In this work, we first show that in a typical surveillance camera deployment, environmental condition changes can have a significant adverse effect on the accuracy of insights from the AUs, but such adverse impact can potentially be mitigated by dynamically adjusting NAUTO camera parameters in response to changes in environmental conditions. Second, since most end-users lack the skill or understanding to appropriately configure these parameters and typically use a fixed parameter setting, we present CAMTUNER, to our knowledge the first framework that dynamically adapts NAUTO camera parameters to optimize the accuracy of AUs in a VAP in response to adverse changes in environmental conditions. CAMTUNER is based on SARSA reinforcement learning and incorporates two novel components: a lightweight analytics quality estimator and a virtual camera that drastically speed up offline RL training. Our controlled experiments and real-world VAP deployment show that, compared to a VAP using the default camera settings, CAMTUNER enhances VAP accuracy by detecting 15.9% additional persons and 2.6%-4.2% additional cars (without any false positives) in a large enterprise parking lot. CAMTUNER opens up new avenues for elevating video analytics accuracy, transcending the incremental gains achieved by refining deep-learning models alone.
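SARSA, the on-policy RL algorithm CAMTUNER builds on, updates its action-value table from the transition it actually took. The sketch below shows the textbook update rule only; the state, action, and reward names are hypothetical stand-ins, not CAMTUNER's actual formulation.

```python
from collections import defaultdict

def sarsa_step(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One on-policy SARSA update:
    Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)).
    In a CAMTUNER-style setting, `s` might encode environmental
    conditions, `a` a NAUTO parameter adjustment, and `r` the estimated
    analytics quality (all illustrative labels)."""
    td_target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)  # action-value table, default 0
sarsa_step(Q, "dim_light", "raise_gain", 1.0, "ok_light", "hold")
```

Because the update uses the action the policy will actually take next (a'), training against a fast quality estimator and virtual camera, as CAMTUNER does, avoids slow interaction with a physical camera.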

Optimal Single-User Interactive Beam Alignment with Feedback Delay

Communication in the millimeter wave (mmWave) band relies on narrow beams due to directionality, high path loss, and shadowing. One can use beam alignment (BA) techniques to find and adjust the direction of these narrow beams. In this paper, BA at the base station (BS) is considered, where the BS sends a set of BA packets to scan different angular regions while the user listens to the channel and sends feedback to the BS for each received packet. It is assumed that the packets and the feedback are correctly decoded at the user and the BS, respectively. Motivated by practical constraints such as propagation delay, a feedback delay for each BA packet is considered. At the end of the BA procedure, the BS allocates to the user a narrow beam, specified by its angle of departure, for data transmission, and the objective is to maximize the resulting expected beamforming gain. A general framework for studying this problem is proposed, based on which a lower bound on the optimal performance as well as an optimality-achieving scheme are obtained. Simulation results reveal significant performance improvements over state-of-the-art BA methods in the presence of feedback delay.
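A simple baseline for this interactive setting is bisection BA: each round the BS scans half of the current angular interval and the user's one-bit feedback says whether the packet was received. The sketch below shows that idealized baseline with instantaneous feedback; modeling feedback delay, the paper's actual contribution, is omitted.

```python
def bisection_beam_alignment(user_angle, lo=0.0, hi=360.0, rounds=8):
    """Idealized interactive bisection beam alignment (no feedback
    delay): halve the candidate angular interval each round using the
    user's noiseless one-bit feedback, returning the final narrow beam
    that contains the user's angle of departure."""
    for _ in range(rounds):
        mid = (lo + hi) / 2.0
        if user_angle < mid:   # feedback: packet heard in the lower half
            hi = mid
        else:                  # feedback: packet heard in the upper half
            lo = mid
    return lo, hi

beam = bisection_beam_alignment(123.4)  # interval of width 360/2**8
```

Each round of feedback halves the beamwidth, so the expected beamforming gain grows with the number of scan/feedback rounds; delayed feedback breaks this tight interleaving, which is what motivates the paper's framework.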

NEC Labs America Attends OFC 2025 in San Francisco

The NEC Labs America Optical Networking and Sensing team is attending the 2025 Optical Fiber Communications Conference and Exhibition (OFC), the premier global event for optical networking and communications. Bringing together over 13,500 attendees from 83+ countries, more than 670 exhibitors, and hundreds of sessions featuring industry leaders, OFC 2025 serves as the central hub for innovation and collaboration in the field. At this year’s conference, NEC Labs America will showcase its cutting-edge research and advancements through multiple presentations, demonstrations, and workshops.

Free-Space Optical Sensing Using Vector Beam Spectra

Vector beams are spatial modes that have spatially inhomogeneous states of polarization. Any light beam is a linear combination of vector beams, the coefficients of which comprise a vector beam “spectrum.” In this work, through numerical calculations, a novel method of free-space optical sensing is demonstrated using vector beam spectra, which are shown to be experimentally measurable via Stokes polarimetry. As proof of concept, vector beam spectra are numerically calculated for various beams and beam obstructions.
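Each spectral coefficient is an overlap integral between the measured field and one basis vector beam. The sketch below shows that inner product on a discretely sampled two-component (Ex, Ey) field; it is a generic numerical illustration, not the paper's Stokes-polarimetry measurement procedure.

```python
def vb_coefficient(field, mode, dA):
    """Discrete overlap integral c = <mode|field>:
    sum over samples of conj(mode) . field times the sample area dA,
    for a field given as (Ex, Ey) complex pairs. Computing this against
    every basis vector beam yields the beam's vector beam spectrum."""
    total = 0j
    for (ex, ey), (mx, my) in zip(field, mode):
        total += (mx.conjugate() * ex + my.conjugate() * ey) * dA
    return total

# A field projected onto itself (unit-normalized) gives coefficient 1.
f = [(1 + 0j, 0j), (1 + 0j, 0j)]
c = vb_coefficient(f, f, dA=0.5)
```

An obstruction that disturbs the field's polarization structure redistributes power among these coefficients, which is what makes the spectrum usable as a sensing signal.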

400-Gb/s mode division multiplexing-based bidirectional free space optical communication in real-time with commercial transponders

In this work, for the first time, we experimentally demonstrate mode division multiplexing-based bidirectional free space optical communication in real-time using commercial transponders. As proof of concept, via bidirectional pairs of Hermite-Gaussian modes (HG00, HG10, and HG01), using a Telecom Infra Project Phoenix compliant commercial 400G transponder, 400-Gb/s data signals (56-Gbaud, DP-16QAM) are bidirectionally transmitted error-free, i.e., with pre-FEC BERs below 1e-2, over approximately 1 m of free space.

EdgeSync: Efficient Edge-Assisted Video Analytics via Network Contention-Aware Scheduling

With the advancement of 5G, edge-assisted video analytics has become increasingly popular, driven by the technology’s ability to support low-latency, high-bandwidth applications. However, in scenarios where multiple clients compete for network resources, network contention poses a significant challenge. In this paper, we propose a novel scheduling algorithm that intelligently batches and aligns the offloading of multiple video analytics clients to optimize both network and edge server resource utilization while meeting the Service Level Objective (SLO). Experiments on a cellular network testbed show that our approach successfully processes 93% or more of the inference requests from 7 different clients to the edge server while meeting the SLOs, whereas other approaches achieve lower success rates, ranging from 65% to 85%, under the same conditions.
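The core idea, batching and aligning client offloads so uplinks never contend, can be sketched as a greedy serializer. This is an illustrative toy, not EdgeSync's actual algorithm, and all timing parameters below are hypothetical.

```python
def batch_offloads(requests, link_ms_per_frame, infer_ms, slo_ms):
    """Greedy contention-aware alignment (illustrative only): give each
    client an exclusive uplink slot in release order so transmissions
    never overlap, run edge inference after each upload, and report
    whether each request still meets its SLO.
    `requests` is a list of (client_id, release_time_ms) tuples."""
    schedule, t = [], 0.0
    for client, release in sorted(requests, key=lambda r: r[1]):
        start = max(t, release)          # wait until the link is free
        t = start + link_ms_per_frame    # exclusive uplink slot
        finish = t + infer_ms            # edge inference completes
        schedule.append((client, finish, finish - release <= slo_ms))
    return schedule
```

Even this toy shows the trade-off the paper addresses: serializing uplinks removes contention losses, but queueing delay grows with the number of clients, so the scheduler must also decide which requests to batch together to keep every client within its SLO.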