Meet The Disruptors: NEC’s Chris White On The Five Things You Need To Shake Up Your Industry | Authority Magazine

Read this fantastic Authority Magazine interview with our President, Christopher White, in which he shares the five things you need to shake up your industry, drawing on his experience pushing the envelope in chemistry, computer science, quantum computing, and artificial intelligence. Chris leads our team to conduct disruptive research rather than merely incremental research, going for the 10x rather than the 10%.

Chris White Interviewed By Mike Vizard on Techstrong.AI

In this excellent Techstrong.ai videocast, Michael Vizard interviews Christopher White, President of NEC Labs America, about AI and its future. They discuss generative AI, its current hype, and its potential impact on content creation and the augmentation of human abilities. Chris emphasizes that generative AI systems are not “thinking machines” but tools to enhance human capabilities. He also highlights the need for a better fundamental understanding of AI systems and the shift toward “invisible AI” that optimizes and predicts individual needs.

Meet the NEC Labs America Intern Helping to Make Autonomous Vehicles Safer and More Secure

There’s much more to autonomous vehicle security than locking a car door. This summer, Kaiyuan Zhang, a 3rd-year computer science Ph.D. student at Purdue University, joined NEC Labs America’s popular intern program to help advance research around autonomous vehicle security. Each year, nearly 50 Ph.D. candidates join NEC Labs America’s innovative program, which centers on a collaborative environment where interns work directly with senior researchers and potential end-user customers.

AI/Fiber-Optic Combo Poised To Improve Telecommunications

Existing underground fiber-optic telecommunications cable networks that can be accessed through street manholes are helping a team at NEC Labs America improve wireless communications systems and the Internet of Things (IoT). “Hundreds of millions of fiber-optic cables are already there for communications purposes,” says Shaobo Han, a researcher at NEC Labs America who focuses on the design and development of machine learning and signal-processing techniques for real-world sensing applications. “We’re turning it all into a ‘thinking’ device, using the same cable that’s already there.”

Industrial Labs to Drive Disruptive Innovation for the Fourth Industrial Revolution

While the previous generation of industrial progress brought us new capabilities, efficiencies, and even delight through digital transformation, we’re entering a new era of innovation, opportunity, and disruption: the Fourth Industrial Revolution. What is the Fourth Industrial Revolution? According to the visionary who coined the term, Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, it is “…characterized by a range of new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies, and industries, and even challenging ideas about what it means to be human.”

Deep Video Codec Control

Lossy video compression is commonly used when transmitting and storing video data. Unified video codecs (e.g., H.264 or H.265) remain the de facto standard, despite the availability of advanced (neural) compression approaches. Transmitting videos in the face of dynamic network bandwidth conditions requires video codecs to adapt to vastly different compression strengths. Rate control modules augment the codec’s compression such that bandwidth constraints are satisfied and video distortion is minimized. However, while both standard video codecs and their rate control modules are developed to minimize video distortion w.r.t. human quality assessment, preserving the downstream performance of deep vision models is not considered. In this paper, we present the first end-to-end learnable deep video codec control that considers both bandwidth constraints and downstream vision performance, without breaking existing standardization. We demonstrate for two common vision tasks (semantic segmentation and optical flow estimation), and on two different datasets, that our deep codec control better preserves downstream performance than 2-pass average bit rate control while meeting dynamic bandwidth constraints and adhering to existing standardization.
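To make the control idea concrete, here is a minimal, self-contained PyTorch sketch — not the paper’s implementation. A toy control network learns a per-frame compression strength from a bandwidth target; a crude differentiable proxy stands in for the standard codec, and a small convolution stands in for the downstream vision model. The module names (CodecProxy, ControlNet), the rate/distortion proxy, and the loss weighting are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: the real method controls a standard codec;
# here a crude differentiable proxy replaces it so gradients can flow.

class CodecProxy(nn.Module):
    """Stand-in codec: stronger compression (larger q) means lower rate
    but more distortion."""
    def forward(self, frame, q):
        noise = torch.randn_like(frame) * q.view(-1, 1, 1, 1)  # distortion grows with q
        rate = 1.0 / (1.0 + q)                                 # rate shrinks with q
        return frame + noise, rate

class ControlNet(nn.Module):
    """Maps a bandwidth target to a compression strength in (0, 1)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, bandwidth):
        return torch.sigmoid(self.mlp(bandwidth)).squeeze(-1)

downstream = nn.Conv2d(3, 8, 3, padding=1)   # stand-in for a segmentation net
codec, control = CodecProxy(), ControlNet()
opt = torch.optim.Adam(control.parameters(), lr=1e-3)

for step in range(200):
    frames = torch.rand(4, 3, 32, 32)
    bandwidth = torch.rand(4, 1)             # dynamic per-frame bandwidth targets
    q = control(bandwidth)
    decoded, rate = codec(frames, q)
    # Preserve downstream features of the decoded frames, not pixel fidelity
    task_loss = (downstream(decoded) - downstream(frames)).pow(2).mean()
    # Penalize only rate overshoot beyond the bandwidth constraint
    rate_penalty = torch.relu(rate - bandwidth.squeeze(-1)).mean()
    loss = task_loss + 10.0 * rate_penalty
    opt.zero_grad(); loss.backward(); opt.step()
```

The design choice mirrored here is the one the abstract names: the training signal comes from a downstream vision model and a bandwidth constraint, rather than from human-perceived distortion.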

Enabling Cooperative Hybrid Beamforming in TDD-based Distributed MIMO Systems

Distributed massive MIMO networks are envisioned to realize cooperative multi-point transmission in next-generation wireless systems. For efficient cooperative hybrid beamforming (CHBF), the cluster of access points (APs) needs to obtain precise estimates of the uplink channel to perform reliable downlink precoding. However, due to the radio frequency (RF) impairments between the transceivers at the two end-points of the wireless channel, full channel reciprocity does not hold, which degrades CHBF performance unless a suitable reciprocity calibration mechanism is in place. We propose a two-step approach to calibrate any two hybrid nodes in the distributed MIMO system. We then present and utilize the novel concept of reciprocal tandem to propose a low-complexity approach for jointly calibrating the cluster of APs and estimating the downlink channel. Finally, we validate our calibration technique’s effectiveness through numerical simulation.
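As a concrete illustration of why calibration recovers reciprocity, the NumPy sketch below implements the classic relative-calibration idea from bidirectional pilot exchanges — not the paper’s two-step or reciprocal-tandem algorithm. Each node’s transmit/receive RF gains break reciprocity, and ratios of bidirectional measurements recover the calibration coefficients up to a common reference; noise is omitted and all sizes are illustrative.

```python
import numpy as np

# Minimal relative reciprocity calibration sketch. Node i has unknown
# transmit/receive RF gains t_i, r_i; the over-the-air channel h_ij is
# reciprocal (h_ij == h_ji), so ratios of bidirectional pilot
# measurements reveal c_i / c_j, where c_i = t_i / r_i.

rng = np.random.default_rng(0)
N = 4                                               # single-antenna nodes
t = rng.normal(size=N) + 1j * rng.normal(size=N)    # transmit-chain gains
r = rng.normal(size=N) + 1j * rng.normal(size=N)    # receive-chain gains
c_true = t / r                                      # calibration coefficients

# Reciprocal over-the-air channels: h[i, j] == h[j, i]
h = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
h = (h + h.T) / 2

# Bidirectional pilot exchange: node i transmits pilot 1, node j receives
y = np.empty((N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        if i != j:
            y[i, j] = r[j] * h[i, j] * t[i]         # noise omitted for clarity

# Relative calibration w.r.t. node 0: y[i,0] / y[0,i] = c_i / c_0
# (the reciprocal channel h cancels in the ratio)
c_hat = np.array([1.0 + 0j] + [y[i, 0] / y[0, i] for i in range(1, N)])
print(np.allclose(c_hat, c_true / c_true[0]))       # True: recovered up to c_0
```

Once the coefficients are known, downlink channels can be inferred from uplink estimates, which is the property cooperative precoding across the AP cluster relies on.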

Blind Cyclic Prefix-based CFO Estimation in MIMO-OFDM Systems

Low-complexity estimation and correction of carrier frequency offset (CFO) are essential in orthogonal frequency division multiplexing (OFDM). In this paper, we propose a low-overhead blind CFO estimation technique based on the cyclic prefix (CP) in multi-input multi-output (MIMO) OFDM systems. We propose to use antenna diversity for CFO estimation: given that the RF chains for all antenna elements at a communication node share the same clock, the CFO between two nodes may be estimated by combining the received signal across all antennas. We further improve our method by combining antenna diversity with time diversity, considering the CP of multiple OFDM symbols. We provide a closed-form expression for CFO estimation and present algorithms that can considerably improve the CFO estimation performance at the expense of a linear increase in computational complexity. We validate the effectiveness of our estimation scheme via extensive numerical analysis.
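The core CP correlation estimator is easy to demonstrate. The NumPy sketch below implements the textbook version — because the CP repeats the last N-spaced samples of each symbol, correlating the CP with its copy N samples later yields a phase of 2πε, and the correlations can be combined across antennas (antenna diversity) and OFDM symbols (time diversity). The paper’s improved algorithms are beyond this toy example; all parameter values are illustrative.

```python
import numpy as np

# Blind CP-based CFO estimation (classic correlation estimator),
# combining correlations over antennas and OFDM symbols.

rng = np.random.default_rng(1)
N, L = 64, 16                 # FFT size, cyclic prefix length
n_ant, n_sym = 4, 8           # receive antennas, OFDM symbols
eps_true = 0.12               # CFO, normalized to the subcarrier spacing

def ofdm_symbol():
    x = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N) / np.sqrt(2)  # QPSK
    s = np.fft.ifft(x) * np.sqrt(N)
    return np.concatenate([s[-L:], s])          # prepend cyclic prefix

corr = 0.0 + 0.0j
for a in range(n_ant):
    h = rng.normal() + 1j * rng.normal()        # flat channel per antenna
    tx = np.concatenate([ofdm_symbol() for _ in range(n_sym)])
    n0 = np.arange(tx.size)
    rx = h * tx * np.exp(2j * np.pi * eps_true * n0 / N)   # apply CFO
    rx += 0.05 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))
    for m in range(n_sym):                      # correlate CP with its copy N later
        start = m * (N + L)
        # np.vdot conjugates its first argument: sum of r*[n] r[n+N]
        corr += np.vdot(rx[start:start + L], rx[start + N:start + N + L])

eps_hat = np.angle(corr) / (2 * np.pi)
print(f"true CFO {eps_true:.3f}, estimate {eps_hat:.3f}")
```

Summing the complex correlations before taking the angle is what makes the diversity combining nearly free: the estimator stays closed-form, and the cost grows only linearly in the number of antennas and symbols.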

AutoTCL: Automated Time Series Contrastive Learning with Adaptive Augmentations

Modern techniques like contrastive learning have been effectively used in many areas, including computer vision, natural language processing, and graph-structured data. Creating positive examples that assist the model in learning robust and discriminative representations is a crucial stage in contrastive learning approaches. Usually, preset human intuition directs the selection of relevant data augmentations. Because humans readily recognize the relevant patterns, this rule of thumb works well in the vision and language domains. However, it is impractical to visually inspect the temporal structures in time series. The diversity of time series augmentations at both the dataset and instance levels makes it difficult to choose meaningful augmentations on the fly. Thus, although prevalent elsewhere, contrastive learning with data augmentation has been less studied in the time series domain. In this study, we address this gap by analyzing time series data augmentation using information theory and summarizing the most commonly adopted augmentations in a unified format. We then propose a parameterized augmentation method, AutoTCL, which can be adaptively employed to support time series representation learning. The proposed approach is encoder-agnostic, allowing it to be seamlessly integrated with different backbone encoders. Experiments on benchmark datasets demonstrate the highly competitive results of our method, with an average 10.3% reduction in MSE and a 7.0% reduction in MAE over the leading baselines.
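To illustrate what a parameterized (rather than hand-picked) augmentation looks like, here is a minimal PyTorch sketch of contrastive learning with a learnable time series augmentation and an InfoNCE loss. It is a generic setup under our own simplifying assumptions — a per-timestep soft mask plus a jitter scale — not the AutoTCL factorization itself; the Encoder is a drop-in module, mirroring the encoder-agnostic design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableAugment(nn.Module):
    """Parameterized augmentation: per-timestep soft mask and a jitter
    scale, both learned end-to-end instead of chosen by hand."""
    def __init__(self, length):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(length))
        self.jitter = nn.Parameter(torch.tensor(0.1))
    def forward(self, x):                       # x: (batch, length)
        mask = torch.sigmoid(self.mask_logits)  # which timesteps to keep
        return x * mask + self.jitter * torch.randn_like(x)

class Encoder(nn.Module):
    """Toy backbone; any encoder can be swapped in here."""
    def __init__(self, length, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(length, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce(z1, z2, tau=0.1):
    logits = z1 @ z2.t() / tau                  # positives on the diagonal
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

length = 128
aug, enc = LearnableAugment(length), Encoder(length)
opt = torch.optim.Adam(list(aug.parameters()) + list(enc.parameters()), lr=1e-3)

for step in range(100):
    x = torch.randn(16, length).cumsum(dim=-1)  # toy random-walk series
    z1, z2 = enc(aug(x)), enc(x)                # augmented view vs. anchor
    loss = info_nce(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the sketch is the gradient path: because the augmentation has parameters, the contrastive objective itself can adapt it per dataset, instead of relying on a human to inspect temporal structure and pick transforms.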

Semantic Multi-Resolution Communications

Deep learning based joint source-channel coding (JSCC) has demonstrated significant advancements in data reconstruction compared to separate source-channel coding (SSCC). This superiority arises from the suboptimality of SSCC when dealing with finite block-length data. Moreover, SSCC falls short in reconstructing data in a multi-user and/or multi-resolution fashion, as it only tries to satisfy the worst channel and/or the highest quality data. To overcome these limitations, we propose a novel deep learning multi-resolution JSCC framework inspired by the concept of multi-task learning (MTL). This proposed framework excels at encoding data for different resolutions through hierarchical layers and effectively decodes it by leveraging both current and past layers of encoded data. Moreover, this framework holds great potential for semantic communication, where the objective extends beyond data reconstruction to preserving specific semantic attributes throughout the communication process. These semantic features could be crucial elements such as class labels, essential for classification tasks, or other key attributes that require preservation. Within this framework, each level of encoded data can be carefully designed to retain specific data semantics. As a result, the precision of a semantic classifier can be progressively enhanced across successive layers, emphasizing the preservation of targeted semantics throughout the encoding and decoding stages. We conduct experiments on the MNIST and CIFAR10 datasets. The experiments on both datasets illustrate that our proposed method is capable of surpassing the SSCC method in reconstructing data with different resolutions, enabling the extraction of semantic features with heightened confidence in successive layers. This capability is particularly advantageous for prioritizing and preserving more crucial semantic features within the datasets.
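A layered JSCC setup can be sketched in a few lines of PyTorch. The toy model below is our own illustrative construction, not the paper’s architecture: the encoder emits two latent layers sent over an AWGN channel, the decoder reconstructs coarsely from layer 1 and refines using both layers, and a classifier head reads class semantics from layer 1. Random tensors stand in for MNIST, and all dimensions and loss weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayeredJSCC(nn.Module):
    """Two-layer joint source-channel code: layer 1 carries coarse /
    semantic content, layer 2 carries refinement detail."""
    def __init__(self, d_in=784, d1=32, d2=32, n_classes=10):
        super().__init__()
        self.enc1 = nn.Linear(d_in, d1)         # coarse / semantic layer
        self.enc2 = nn.Linear(d_in, d2)         # refinement layer
        self.dec1 = nn.Linear(d1, d_in)         # decode from layer 1 only
        self.dec2 = nn.Linear(d1 + d2, d_in)    # decode from both layers
        self.cls = nn.Linear(d1, n_classes)     # semantic head on layer 1
    def forward(self, x, snr_db=10.0):
        z1, z2 = self.enc1(x), self.enc2(x)
        sigma = 10 ** (-snr_db / 20)
        z1 = z1 + sigma * torch.randn_like(z1)  # AWGN channel, layer 1
        z2 = z2 + sigma * torch.randn_like(z2)  # AWGN channel, layer 2
        x_coarse = self.dec1(z1)
        x_fine = self.dec2(torch.cat([z1, z2], dim=-1))
        return x_coarse, x_fine, self.cls(z1)

model = LayeredJSCC()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.rand(32, 784)                     # stand-in for MNIST images
    y = torch.randint(0, 10, (32,))             # stand-in class labels
    x_coarse, x_fine, logits = model(x)
    loss = (F.mse_loss(x_coarse, x)             # coarse reconstruction
            + F.mse_loss(x_fine, x)             # refined reconstruction
            + F.cross_entropy(logits, y))       # preserve class semantics
    opt.zero_grad(); loss.backward(); opt.step()
```

Training the three objectives jointly is the multi-task flavor the abstract describes: a receiver that only gets layer 1 still obtains a usable reconstruction and the class semantics, while receivers with more channel budget decode layer 2 for higher fidelity.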