Artificial Intelligence (AI) refers to the development of computer systems or algorithms that can perform tasks that typically require human intelligence. These tasks include problem-solving, learning, understanding natural language, speech recognition, visual perception, and decision-making. AI aims to create machines capable of mimicking cognitive functions associated with human intelligence, allowing them to adapt, learn from experience, and execute tasks autonomously.

Posts

Shaping the Future with Responsible AI, Collaboration, and Disruption

Chris White, President of NEC Laboratories America, reflects on the lab’s mission to build responsible, human-centered technology—from AI to streetscape innovation—that tackles real-world challenges. In recent keynotes and interviews, he’s emphasized the power of collaboration, the importance of designing AI as a tool that empowers (not replaces), and the discipline required to scale truly disruptive ideas. He’s also shared thoughts on using digital tools for sustainability, such as optimizing global water systems, and the need for cooperative decision-making in complex environments like supply chains. Through it all, he reminds us: real innovation isn’t about flashy tech—it’s about solving meaningful problems, at scale, with intention and integrity.

NEC Labs America Attends the 39th Annual AAAI Conference on Artificial Intelligence #AAAI25

Our NEC Labs America team attended the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25) at the Pennsylvania Convention Center in Philadelphia, Pennsylvania, from February 25 to March 4, 2025. The AAAI conference series promotes research in Artificial Intelligence (AI) and fosters scientific exchange among researchers, practitioners, scientists, students, and engineers across the entirety of AI and its affiliated disciplines. Our team presented technical papers, led special tracks, delivered talks on key topics, participated in workshops, conducted tutorials, and showcased research in poster sessions. The team also greeted visitors at Booth #208 from Thursday through Saturday.

Evaluating Cellularity Estimation Methods: Comparing AI Counting with Pathologists’ Visual Estimates

The development of next-generation sequencing (NGS) has enabled the discovery of cancer-specific driver gene alterations, making precision medicine possible. However, accurate genetic testing requires a sufficient amount of tumor cells in the specimen. The evaluation of tumor content ratio (TCR) from hematoxylin and eosin (H&E)-stained images has been found to vary between pathologists, making it an important challenge to obtain an accurate TCR. In this study, three pathologists exhaustively labeled all cells in 41 regions from 41 lung cancer cases as tumor, non-tumor, or indistinguishable, thus establishing a “gold standard” TCR. We then compared the accuracy of the TCR estimated by 13 pathologists based on visual assessment with the TCR calculated by an AI model that we have developed. It is a compact and fast model that follows a fully convolutional neural network architecture and produces cell detection maps, which can be efficiently post-processed to obtain tumor and non-tumor cell counts, from which the TCR is calculated. Its raw cell detection accuracy is 92%, and its classification accuracy is 84%. The results show that the error between the gold standard TCR and the AI calculation was significantly smaller than that between the gold standard TCR and the pathologists’ visual assessment (p < 0.05). Additionally, the robustness of AI models across institutions is a key issue, and we demonstrate that the variation in AI was smaller than that in the average of pathologists when evaluated by institution. These findings suggest that the accuracy of tumor cellularity assessments in clinical workflows can be significantly improved by the introduction of robust AI models, leading to more efficient genetic testing and ultimately to better patient outcomes.
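
Once per-cell detections are classified as tumor or non-tumor, the TCR itself is just a ratio of counts. Below is a minimal sketch of that final step; the label encoding and function name are illustrative assumptions, not the paper's post-processing pipeline.

```python
import numpy as np

def tumor_content_ratio(cell_labels: np.ndarray) -> float:
    """Compute the tumor content ratio (TCR) from per-cell class labels.

    Assumed encoding (illustrative, not the paper's):
    1 = tumor, 2 = non-tumor, 0 = background / indistinguishable.
    """
    tumor = int(np.count_nonzero(cell_labels == 1))
    non_tumor = int(np.count_nonzero(cell_labels == 2))
    total = tumor + non_tumor
    if total == 0:
        raise ValueError("no tumor or non-tumor cells detected")
    return tumor / total

# Example: 420 tumor cells and 180 non-tumor cells -> TCR = 0.70
labels = np.array([1] * 420 + [2] * 180)
print(f"TCR = {tumor_content_ratio(labels):.2f}")
```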

Citizen Science for the Sea with Information Technologies: An Open Platform for Gathering Marine Data and Marine Litter Detection from Leisure Boat Instruments

Data crowdsourcing is an increasingly pervasive and lifestyle-changing technology due to the flywheel effect that results from the interaction between the Internet of Things and Cloud Computing. This paper presents the Citizen Science for the Sea with Information Technologies (C4Sea-IT) framework, an open platform for gathering marine data from leisure boat instruments. C4Sea-IT aims to provide a coastal marine data gathering, moving, processing, exchange, and sharing platform using the navigation instruments and sensors already found on today’s leisure and professional vessels. In this work, a use case for the detection and tracking of marine litter is shown. The final goal is to augment weather/ocean forecasts with Artificial Intelligence prediction models trained on crowdsourced data.
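
To make the crowdsourcing idea concrete, the sketch below shows what a single observation record submitted from a boat's instruments might look like before upload. The field names and structure are illustrative assumptions only; they are not the C4Sea-IT schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class MarineObservation:
    """One crowdsourced record from a leisure boat's instruments.

    Field names are illustrative; they are not the C4Sea-IT schema.
    """
    vessel_id: str
    timestamp: str           # ISO 8601, UTC
    latitude: float          # decimal degrees
    longitude: float         # decimal degrees
    sea_surface_temp_c: float
    wind_speed_kn: float
    litter_detected: bool    # flag raised by an onboard litter detector

obs = MarineObservation(
    vessel_id="boat-0042",
    timestamp=datetime.now(timezone.utc).isoformat(),
    latitude=43.71,
    longitude=10.27,
    sea_surface_temp_c=18.4,
    wind_speed_kn=12.5,
    litter_detected=True,
)

# Serialize the record for upload to the data-gathering platform.
print(json.dumps(asdict(obs), indent=2))
```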

Using AI To Safely Put The First Woman On The Moon

We are helping to safely bring the first woman astronaut to the Moon as part of the National Aeronautics and Space Administration’s (NASA) Artemis Project with our System Invariant Analysis Technology (SIAT). Together with Lockheed Martin Space’s T-Tauri AI platform, our SIAT analytics engine takes data from 150,000 sensors and creates a model incorporating over 22 billion data relationships. The AI model is then analyzed to find irregularities that could indicate a possible malfunction in any of the spacecraft’s systems.
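
The exact SIAT algorithms are not described here, but invariant-style analysis is commonly illustrated as learning stable relationships between sensor pairs during normal operation and flagging readings that break those relationships. The sketch below is a generic toy version of that idea, with made-up thresholds and helper names; it is not NEC's SIAT implementation.

```python
import numpy as np

def fit_invariants(normal: np.ndarray) -> dict:
    """Fit a linear relationship y ~ a*x + b for each ordered sensor pair.

    `normal` has shape (time, sensors) and holds fault-free training data.
    Illustrative stand-in for invariant learning, not NEC's SIAT.
    """
    _, n = normal.shape
    invariants = {}
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            a, b = np.polyfit(normal[:, i], normal[:, j], deg=1)
            resid = normal[:, j] - (a * normal[:, i] + b)
            invariants[(i, j)] = (a, b, resid.std())
    return invariants

def broken_invariants(invariants: dict, sample: np.ndarray, k: float = 4.0):
    """Return sensor pairs whose learned relationship the sample violates."""
    broken = []
    for (i, j), (a, b, sigma) in invariants.items():
        if abs(sample[j] - (a * sample[i] + b)) > k * sigma:
            broken.append((i, j))
    return broken

# Train on simulated healthy telemetry, then check a drifted reading.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
healthy = np.hstack([x,
                     2 * x + 0.01 * rng.normal(size=(500, 1)),
                     -x + 0.01 * rng.normal(size=(500, 1))])
inv = fit_invariants(healthy)
print(broken_invariants(inv, np.array([1.0, 5.0, -1.0])))  # pairs involving sensor 1
```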

AI-Driven Applications over Telecom Networks by Distributed Fiber Optic Sensing Technologies

By employing distributed fiber optic sensing (DFOS) technologies, field-deployed fiber cables can be utilized not only as communication media for data transmission but also as sensing media for continuously monitoring physical phenomena along the entire route. The fiber can be used to monitor the ambient environment along the route, covering a wide geographic area. With the help of artificial intelligence and machine learning (AI/ML) technologies for information processing, many applications can be developed over telecom networks. We review recent field results and demonstrate how DFOS can work with existing communication channels to provide a holistic view of road traffic, including vehicle counts and average vehicle speeds. A long-term, wide-area road traffic monitoring system is an efficient way of gathering seasonal vehicle activity that can be applied in future smart city applications. Additionally, DFOS offers cable cut prevention functions such as cable self-protection and cable cut threat assessment: detection and localization of abnormal events and evaluation of the threat to the cable are realized to protect telecom facilities.
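
In DFOS data, a moving vehicle shows up as a tilted trace in the time-versus-distance "waterfall", and its slope gives the speed. The following is a minimal sketch of that estimation step on synthetic data; the thresholding, peak-tracking approach, and parameter values are assumptions, not the deployed processing pipeline.

```python
import numpy as np

def average_speed_kmh(waterfall: np.ndarray, dt_s: float, dx_m: float,
                      threshold: float = 0.5) -> float:
    """Estimate a vehicle's speed from a DFOS waterfall (time x distance).

    For each time step, take the position of peak energy above `threshold`
    and fit distance vs. time; the slope is the speed.  Toy illustration
    only, not the field-deployed algorithm.
    """
    times, positions = [], []
    for t in range(waterfall.shape[0]):
        row = waterfall[t]
        if row.max() >= threshold:
            times.append(t * dt_s)
            positions.append(int(row.argmax()) * dx_m)
    slope_m_per_s, _ = np.polyfit(times, positions, deg=1)
    return slope_m_per_s * 3.6

# Synthetic trace: one vehicle moving at ~25 m/s (90 km/h) along 1000 bins.
dt, dx = 0.1, 1.0
wf = np.zeros((200, 1000))
for t in range(200):
    pos = int(25.0 * t * dt / dx)
    if pos < 1000:
        wf[t, pos] = 1.0
print(f"estimated speed ~ {average_speed_kmh(wf, dt, dx):.0f} km/h")
```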

Employing Telecom Fiber Cables as Sensing Media for Road Traffic Applications

Distributed fiber optic sensing (DFOS) systems allow deployed fiber cables to serve as sensing media rather than only fulfilling the dedicated function of data transmission. The fiber cable can monitor the ambient environment over a wide area for many applications. We review recent field trial results and show how artificial intelligence (AI) can help with the application of road traffic monitoring. The results show that fiber sensing can capture periodic traffic changes on hourly, daily, weekly, and seasonal scales.
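
The periodic patterns mentioned above are typically exposed by aggregating the sensed vehicle counts at different time scales. A minimal sketch follows, assuming hourly counts are available as a pandas time series; the synthetic data and variable names are placeholders, not the trial's data.

```python
import numpy as np
import pandas as pd

# Synthetic hourly vehicle counts over one year (stand-in for DFOS output).
rng = np.random.default_rng(1)
idx = pd.date_range("2021-01-01", periods=24 * 365, freq="h")
hours = idx.hour.to_numpy()
diurnal = 50 + 40 * np.sin(2 * np.pi * (hours - 6) / 24)       # daytime bump
weekend = np.where(idx.dayofweek.to_numpy() >= 5, 0.6, 1.0)    # lighter weekends
counts = pd.Series(rng.poisson(diurnal * weekend), index=idx)

# Aggregate at the scales reported in the trial: hourly, weekly, seasonal.
hourly_profile = counts.groupby(counts.index.hour).mean()        # diurnal cycle
weekday_profile = counts.groupby(counts.index.dayofweek).mean()  # weekly cycle
monthly_totals = counts.resample("MS").sum()                     # seasonal trend

print(hourly_profile.round(1))
print(weekday_profile.round(1))
```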

Coordination of PV Smart Inverters Using Deep Reinforcement Learning for Grid Voltage Regulation

Increasing adoption of solar photovoltaic (PV) generation presents new challenges to the modern power grid due to its variable and intermittent nature. Fluctuating outputs from PV generation can cause the grid to violate voltage operation limits. PV smart inverters (SIs) provide a fast-response method to regulate voltage by modulating real and/or reactive power at the connection point. Yet the existing local autonomous control scheme for SIs is based on local information without coordination, which can lead to suboptimal performance. In this paper, a deep reinforcement learning (DRL) based algorithm is developed and implemented for coordinating multiple SIs. The reward scheme of the DRL is carefully designed to ensure that the voltage operation limits of the grid are met with more effective utilization of SI reactive power. The proposed DRL agent for voltage control can learn its policy through interaction with massive offline simulations and adapts to load and solar variations. The performance of the DRL agent is compared against local autonomous control on the IEEE 37-node system with thousands of scenarios. The results show that a properly trained DRL agent can intelligently coordinate different SIs to maintain grid voltage within allowable ranges, reduce PV production curtailment, and decrease system losses.
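
The paper's exact reward design is not reproduced here, but a reward of the kind described can be sketched as a heavy penalty on voltage-limit violations plus a smaller penalty on reactive power use, so the agent meets the limits with the least reactive effort. All coefficients and names below are illustrative assumptions.

```python
import numpy as np

def voltage_regulation_reward(voltages_pu: np.ndarray,
                              reactive_power_pu: np.ndarray,
                              v_min: float = 0.95, v_max: float = 1.05,
                              w_violation: float = 100.0,
                              w_reactive: float = 1.0) -> float:
    """Illustrative reward for DRL-coordinated smart inverters.

    Heavily penalizes bus voltages outside the allowed band and mildly
    penalizes total reactive power.  Coefficients are made up; the
    published reward design differs in detail.
    """
    over = np.clip(voltages_pu - v_max, 0.0, None)
    under = np.clip(v_min - voltages_pu, 0.0, None)
    violation_penalty = w_violation * float(np.sum(over + under))
    reactive_penalty = w_reactive * float(np.sum(np.abs(reactive_power_pu)))
    return -(violation_penalty + reactive_penalty)

# Example: one bus slightly above 1.05 p.u. dominates the reward signal.
v = np.array([1.02, 1.06, 0.99])
q = np.array([0.10, -0.05, 0.00])
print(voltage_regulation_reward(v, q))  # ~ -(100*0.01 + 0.15) = -1.15
```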

Neuron-Network-based Nonlinearity Compensation Algorithm

A simplified, system-agnostic NLC algorithm based on a neuron network is proposed to pre-distort symbols at the transmitter side, demonstrating ~0.6 dB Q improvement after 2800 km SMF transmission using 32 Gbaud DP-16QAM.
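
As a rough illustration of transmitter-side pre-distortion, the sketch below maps a window of neighboring symbols to a small complex correction with a tiny neural network. The architecture, window length, input layout, and scaling are assumptions, not the published design, and no training loop is shown.

```python
import torch
import torch.nn as nn

class SymbolPredistorter(nn.Module):
    """Tiny MLP that maps a window of transmitted symbols to a correction.

    Input: real/imag parts of the current symbol and its neighbors,
    interleaved as [r0, i0, r1, i1, ...].  Output: (real, imag) of a
    perturbation added before launch.  Sizes are illustrative guesses.
    """
    def __init__(self, window: int = 9, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * window, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # (real, imag) of the correction
        )

    def forward(self, symbol_window: torch.Tensor) -> torch.Tensor:
        return self.net(symbol_window)

# Pre-distort a batch of symbol windows (random stand-in for 16QAM symbols).
model = SymbolPredistorter()
windows = torch.randn(64, 18)             # 64 windows of 9 complex symbols
correction = model(windows)
center = windows[:, 8:10]                 # real/imag of the center symbol
predistorted = center + 0.1 * correction  # scaled correction before launch
print(predistorted.shape)                 # torch.Size([64, 2])
```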

Evolution from 8QAM live traffic to PCS 64-QAM with Neural-Network Based Nonlinearity Compensation on 11000 km Open Subsea Cable

We report on the evolution of the longest segment of the FASTER cable, at 11,017 km, with 8QAM transponders in service at 4 b/s/Hz spectral efficiency (SE). With offline testing, 6 b/s/Hz is further demonstrated using probabilistically shaped 64QAM and a novel, low-complexity nonlinearity compensation technique based on generating a black-box model of the transmission by training an artificial neural network, resulting in the largest SE-distance product of 66,102 b/s/Hz·km over a live-traffic-carrying cable.
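
For reference, the quoted SE-distance product is the demonstrated spectral efficiency multiplied by the segment length: 6 b/s/Hz × 11,017 km = 66,102 b/s/Hz·km.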