NEC Corporation is a global leader in IT and network technologies, providing advanced solutions in AI, biometrics, smart cities, and communications. It drives innovation for social value creation and infrastructure resilience. As part of the broader NEC family, NECLA frequently collaborates with NEC Corporation on next-generation networking, AI, and secure computing systems. Our joint efforts span fundamental research to real-world deployments, including innovations in optical networks, data science platforms, and trusted AI frameworks. Please read about our latest news and collaborative publications with NEC Corporation.

Posts

ICeTEA: Mixture of Detectors for Metric-Log Anomaly Detection

Anomaly detection is essential for identifying unusual system behaviors and has wide-ranging applications, from fraud detection to system monitoring. In web servers, anomalies are typically detected using two types of data: metrics (numerical indicators of performance) and logs (records of system events). Correlations between metrics and logs in real-world scenarios highlight the need for joint analysis, termed the “metric-log anomaly detection” problem, yet this problem has not been fully explored due to the inherent differences between metrics and logs. In this paper, we propose ICeTEA, a novel system for metric-log anomaly detection that integrates three detectors: a metric-log detector based on a multimodal Variational Autoencoder (VAE), and two individual metric and log detectors. By leveraging an ensemble technique to combine the outputs of these detectors, ICeTEA enhances the effectiveness and robustness of metric-log anomaly detection. Case studies demonstrate two key functionalities of ICeTEA: data visualization and ranking of contributions to anomaly scores. Experiments demonstrate that ICeTEA accurately detects true anomalies while significantly reducing false positives.
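As an illustrative sketch only (the normalization and weights below are assumptions for exposition, not ICeTEA's published design), the ensemble step that combines the three detectors' per-sample anomaly scores might look like this:

```python
import numpy as np

def ensemble_anomaly_scores(metric_log_scores, metric_scores, log_scores,
                            weights=(0.5, 0.25, 0.25)):
    """Combine normalized detector scores into one anomaly score per sample."""
    def normalize(s):
        # Min-max normalize so detectors with different score scales are comparable.
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    stacked = np.stack([normalize(metric_log_scores),
                        normalize(metric_scores),
                        normalize(log_scores)])
    w = np.asarray(weights)[:, None]
    return (w * stacked).sum(axis=0)

scores = ensemble_anomaly_scores([0.1, 0.9, 0.2], [0.2, 0.8, 0.1], [0.0, 1.0, 0.3])
print(int(np.argmax(scores)))  # sample flagged as most anomalous -> 1
```

Combining detectors this way lets the multimodal and single-modality views vote, so a spurious spike in only one detector is less likely to raise a false alarm.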

On Synthesizing Data for Context Attribution in Question Answering

Question Answering (QA) accounts for a significant portion of LLM usage “in the wild”. However, LLMs sometimes produce false or misleading responses, also known as hallucinations. Therefore, grounding the generated answers in contextually provided information, i.e., providing evidence for the generated text, is paramount for LLMs’ trustworthiness. Providing this information is the task of context attribution. In this paper, we systematically study LLM-based approaches for this task, namely we investigate (i) zero-shot inference, (ii) LLM ensembling, and (iii) fine-tuning of small LMs on synthetic data generated by larger LLMs. Our key contribution is SYNQA: a novel generative strategy for synthesizing context attribution data. Given selected context sentences, an LLM generates QA pairs that are supported by these sentences. This leverages LLMs’ natural strengths in text generation while ensuring clear attribution paths in the synthetic training data. We show that the attribution data synthesized via SYNQA is highly effective for fine-tuning small LMs for context attribution in different QA tasks and domains. Finally, with a user study, we validate the usefulness of small, efficient LMs (fine-tuned on synthetic data from SYNQA) in context attribution for QA.
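A minimal sketch of the generation loop described above, under the assumption that each selected context sentence seeds one QA pair (here `call_llm` is a stand-in stub, not a real API, and the prompt wording is invented for illustration):

```python
def call_llm(prompt):
    # Placeholder standing in for any chat-completion endpoint; it echoes the
    # last prompt line as the "answer" so the sketch runs without a model.
    return "Q: What does the sentence state?\nA: " + prompt.split("\n")[-1]

def synthesize_attribution_data(context_sentences):
    """Build (question, answer, supporting-sentence) triples with known attribution."""
    examples = []
    for idx, sentence in enumerate(context_sentences):
        prompt = ("Write a question answerable from the sentence below, "
                  "then the answer.\n" + sentence)
        response = call_llm(prompt)
        question, answer = response.split("\nA: ")
        examples.append({"question": question.removeprefix("Q: "),
                         "answer": answer,
                         "supporting_sentence_idx": idx})
    return examples

data = synthesize_attribution_data(["Paris is the capital of France.",
                                    "The Seine flows through Paris."])
print(len(data), data[0]["supporting_sentence_idx"])  # -> 2 0
```

Because the supporting sentence is fixed before generation, every synthetic example carries a clean attribution label, which is the property that makes the data usable for fine-tuning small attribution models.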

Uncertainty Propagation on LLM Agent

Large language models (LLMs) integrated into multi-step agent systems enable complex decision-making processes across various applications. However, their outputs often lack reliability, making uncertainty estimation crucial. Existing uncertainty estimation methods primarily focus on final-step outputs, which fail to account for cumulative uncertainty over the multi-step decision-making process and the dynamic interactions between agents and their environments. To address these limitations, we propose SAUP (Situation Awareness Uncertainty Propagation), a novel framework that propagates uncertainty through each step of an LLM-based agent’s reasoning process. SAUP incorporates situational awareness by assigning situational weights to each step’s uncertainty during the propagation. Our method, compatible with various one-step uncertainty estimation techniques, provides a comprehensive and accurate uncertainty measure. Extensive experiments on benchmark datasets demonstrate that SAUP significantly outperforms existing state-of-the-art methods, achieving up to 20% improvement in AUROC.
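As a toy illustration of the propagation idea (the weighting scheme here is an assumption for exposition, not SAUP's exact formulation), each step's one-step uncertainty can be scaled by a situational weight and accumulated into an agent-level score:

```python
def propagate_uncertainty(step_uncertainties, situational_weights):
    """Weighted accumulation of per-step uncertainties into one agent-level score.

    Steps judged more situationally critical (higher weight) contribute more
    to the final uncertainty than a final-step-only estimate would capture.
    """
    assert len(step_uncertainties) == len(situational_weights)
    total_w = sum(situational_weights)
    return sum(w * u for u, w in zip(step_uncertainties, situational_weights)) / total_w

# Three-step trace: the middle step is both uncertain (0.7) and weighted
# as situationally critical (2.0), so it dominates the propagated score.
u = propagate_uncertainty([0.2, 0.7, 0.4], [1.0, 2.0, 1.0])
print(round(u, 3))  # -> 0.5
```

Note how a final-step-only estimator would report 0.4 here, while the propagated score (0.5) reflects the risky intermediate step.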

First City-Scale Deployment of DASs with Satellite Imagery and AI for Live Telecom Infrastructure Management

We demonstrate real-time fiber risk assessment and dynamic network routing in live metro networks using deployed DASs, satellite imagery, and large-scale AI, achieving the first significant reduction in fiber failures in four years.

Span-based Polarization Sensing in Cables Without Reflectors

Polarization-based, multi-span sensing over a link without reflection-back circuits is demonstrated experimentally. It is shown that distributed reflection from Rayleigh scattering can serve as an alternative to reflectors after spatial averaging of the received state of polarization.
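The spatial-averaging idea can be sketched numerically: Rayleigh backscatter yields a noisy state-of-polarization (Stokes vector) estimate at every fiber position, and averaging over a window of positions recovers a stable SOP without a discrete reflector. The window size and synthetic data below are illustrative assumptions:

```python
import numpy as np

def average_sop(stokes, window):
    """Moving average of Stokes vectors along fiber position, renormalized to unit length."""
    stokes = np.asarray(stokes, dtype=float)  # shape (positions, 3)
    kernel = np.ones(window) / window
    smoothed = np.stack([np.convolve(stokes[:, i], kernel, mode="valid")
                         for i in range(3)], axis=1)
    norms = np.linalg.norm(smoothed, axis=1, keepdims=True)
    return smoothed / norms

rng = np.random.default_rng(0)
true_sop = np.array([1.0, 0.0, 0.0])          # ground-truth polarization state
noisy = true_sop + 0.3 * rng.standard_normal((1000, 3))  # Rayleigh-like noise
est = average_sop(noisy, window=200)
# the averaged estimate converges toward the true SOP
print(np.abs(est.mean(axis=0) - true_sop).max() < 0.05)
```

Averaging over 200 positions reduces the per-component noise by roughly a factor of sqrt(200), which is what makes the distributed backscatter usable in place of a dedicated reflector.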

Toward Intelligent and Efficient Optical Networks: Performance Modeling, Co-existence, and Field Trials

Optical transmission networks require intelligent traffic adaptation and efficient spectrum usage. We present scalable machine learning (ML) methods for network performance modeling, and field trials of distributed fiber sensing and classic optical network traffic coexistence.

GFF-Agnostic Black Box Gain Model for non-Flat Input Spectrum

We present a simple and accurate semi-analytical model predicting the gain of a single-stage erbium-doped fiber amplifier (EDFA) embedded with an unknown gain flattening filter (GFF). Characteristic wavelength-dependent gain coefficients and their scaling laws are extracted with a limited set of simple flat input spectrum measurements at variable temperatures and pump powers. Based on a black box approach, the proposed model provides a precise gain profile estimation of GFF-embedded EDFA for non-flat input spectra in variable temperature and pump power conditions. The accuracy of the presented methodology is validated on an extensive experimental dataset and compared with state-of-the-art semi-analytical gain models.

Phase-noise Tolerant Per-span Phase and Polarization Sensing

Subsea cables include a supervisory system that monitors the health of the amplifier pumps and the fiber loss on a per-span basis. In some cables, the monitoring is achieved optically and passively using high-loss loopback paths and wavelength-selective reflectors. By sending monitoring pulses through the supervisory channel and comparing the phases and polarizations of the returning pulses reflected by consecutive reflectors, dynamic disturbances affecting individual spans can be monitored on a per-span basis. Such per-span phase monitoring techniques require high phase coherence compared to DAS systems, since spans are tens of kilometers long while typical DAS resolution is on the order of meters. A time-frequency spread technique was demonstrated to relax the coherence-length requirement; however, the limits of its effectiveness were not quantified. In this paper, we present a detailed lab-experiment analysis of the trade-off between implementation complexity and phase-noise tolerance for a given span length.

Optical Flow Processing for Chirp-Pulse Coherent OTDR

We propose a novel optical flow processing technique for distributed temperature and strain sensing with the chirped-pulse coherent OTDR. Unlike conventional 1-dimensional cross-correlation methods, the technique treats the 2-dimensional waterfall data as sequential video frames, estimating local shifts through optical flow. A weighted least-squares approach with adaptive window size enables pixel-level optical flow calculation, providing accurate local shifts via accumulative tracks with enhanced spatial resolution. Preliminary experimental results over 20 km of fiber demonstrate its effectiveness for dynamic temperature and strain sensing, addressing limitations of traditional methods and improving sensing capabilities.
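A minimal Lucas-Kanade-style sketch of the core idea: treat consecutive rows of the waterfall (time x position) as video frames and estimate the local shift at a position by weighted least squares over a window. The window size, weighting, and synthetic Gaussian feature below are illustrative assumptions, not the paper's exact processing chain:

```python
import numpy as np

def local_shift(frame0, frame1, center, half_win=5):
    """Estimate the sub-pixel shift of frame1 relative to frame0 near `center`."""
    lo, hi = center - half_win, center + half_win + 1
    gx = np.gradient(frame0)[lo:hi]   # spatial gradient within the window
    gt = (frame1 - frame0)[lo:hi]     # temporal difference between frames
    w = np.hanning(hi - lo)           # weight the window center more heavily
    # Weighted least-squares solution of gx * d = -gt for the shift d.
    return -np.sum(w * gx * gt) / np.sum(w * gx * gx)

x = np.arange(200, dtype=float)
f0 = np.exp(-((x - 100) / 10) ** 2)    # Gaussian feature in one waterfall row
f1 = np.exp(-((x - 100.5) / 10) ** 2)  # same feature shifted by 0.5 px (next row)
print(round(local_shift(f0, f1, center=100), 2))
```

Because the estimate is a least-squares fit over the whole window rather than a 1-D correlation peak, it resolves sub-pixel shifts directly, which is the property the abstract leverages for enhanced spatial resolution.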

DISC: Dynamic Decomposition Improves LLM Inference Scaling (SSI-FM)

Inference scaling methods often rely on decomposing problems into steps, followed by sampling and selecting the best next steps. However, these steps and their sizes are typically fixed or depend on domain knowledge. We propose dynamic decomposition, a method that adaptively and automatically breaks down solution and reasoning traces into manageable steps during inference. By allocating compute more effectively, particularly by subdividing challenging steps and sampling them more frequently, dynamic decomposition significantly enhances inference efficiency. Experiments on benchmarks such as APPS, MATH, and LiveCodeBench demonstrate that dynamic decomposition outperforms static approaches, including token-level, sentence-level, and single-step decompositions. These findings highlight the potential of dynamic decomposition to improve a wide range of inference scaling techniques.
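A toy sketch of the dynamic-decomposition idea: given per-step difficulty estimates (e.g., low estimated success probabilities), split hard steps into smaller sub-steps and give them a larger share of the sampling budget. The threshold, splitting rule, and allocation formula below are illustrative assumptions, not the paper's algorithm:

```python
def allocate_samples(steps, budget, hard_threshold=0.5):
    """steps: list of (name, estimated_success_prob). Returns samples per step."""
    # Subdivide hard steps in two so each half gets its own sampling budget.
    expanded = []
    for name, p in steps:
        if p < hard_threshold:
            expanded += [(name + ".a", p), (name + ".b", p)]
        else:
            expanded.append((name, p))
    # Allocate samples proportionally to difficulty (1 - success probability).
    difficulty = [1.0 - p for _, p in expanded]
    total = sum(difficulty)
    return {name: max(1, round(budget * d / total))
            for (name, _), d in zip(expanded, difficulty)}

# The hard "solve" step is split and receives most of the 20-sample budget.
plan = allocate_samples([("parse", 0.9), ("solve", 0.3), ("verify", 0.8)], budget=20)
print(plan)
```

The point of the sketch is the adaptivity: easy steps keep a single coarse unit with few samples, while the challenging step is both subdivided and sampled more often, mirroring how dynamic decomposition reallocates compute during inference.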