Deep Learning is a subfield of artificial intelligence (AI) and machine learning (ML) that focuses on the development and application of neural networks, which are computational models inspired by the structure and function of the human brain. Deep learning algorithms aim to learn and represent data in increasingly abstract and hierarchical ways, allowing them to automatically discover patterns, features, and representations from raw input data.

Posts

Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective

Zero-shot learning (ZSL) aims to recognize instances of unseen classes solely based on the semantic descriptions of the classes. Existing algorithms usually formulate it as a semantic-visual correspondence problem, by learning mappings from one feature space to the other. Despite being reasonable, previous approaches essentially discard the highly precious discriminative power of visual features in an implicit way, and thus produce undesirable results. We instead reformulate ZSL as a conditioned visual classification problem, i.e., classifying visual features based on the classifiers learned from the semantic descriptions. With this reformulation, we develop algorithms targeting various ZSL settings: For the conventional setting, we propose to train a deep neural network that directly generates visual feature classifiers from the semantic attributes with an episode-based training scheme; For the generalized setting, we concatenate the learned highly discriminative classifiers for seen classes and the generated classifiers for unseen classes to classify visual features of all classes; For the transductive setting, we exploit unlabeled data to effectively calibrate the classifier generator using a novel learning-without-forgetting self-training mechanism and guide the process by a robust generalized cross-entropy loss. Extensive experiments show that our proposed algorithms significantly outperform state-of-the-art methods by large margins on most benchmark datasets in all the ZSL settings.
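As an illustration of the core idea, the sketch below (PyTorch, with hypothetical attribute and feature dimensions) generates a visual-feature classifier from each class's semantic attributes and scores visual features against the generated classifiers. The episode-based training, generalized-setting classifier concatenation, and transductive self-training described above are omitted; this is not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierGenerator(nn.Module):
    """Maps semantic attribute vectors to visual-feature classifier weights."""
    def __init__(self, attr_dim=85, feat_dim=2048, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, attrs):
        # attrs: (num_classes, attr_dim) -> (num_classes, feat_dim)
        return self.net(attrs)

def classify(features, attrs, generator):
    """Score visual features against classifiers generated from attributes."""
    weights = generator(attrs)                # (C, feat_dim)
    weights = F.normalize(weights, dim=-1)    # cosine-style scoring
    features = F.normalize(features, dim=-1)  # (N, feat_dim)
    return features @ weights.t()             # (N, C) logits

# Toy episode: 10 classes, 32 visual features
gen = ClassifierGenerator()
attrs = torch.randn(10, 85)
feats = torch.randn(32, 2048)
logits = classify(feats, attrs, gen)
labels = torch.randint(0, 10, (32,))
loss = F.cross_entropy(logits, labels)
loss.backward()
```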

VeCharge: Intelligent Energy Management for Electric Vehicle charging

According to Navigant, North America's 1.2 million charging ports in 2018 will grow more than tenfold to over 12.6 million by 2027, which could overwhelm the nation's grids. DC fast charging requires grid upgrades to supply the new charging demand, yet because the utilization of those charging stations is currently low, demand charges can reach up to 90% of the total bill. Combining fast charging with energy storage can mitigate grid impacts and reduce demand charges. Many energy suppliers have proposed EV-specific pricing for EV charging. Without managed charging, EV owners lose the opportunity to lower charging costs by avoiding peak-hour charging or by charging during periods when renewable energy generation is abundant.

Data-Driven Day-Ahead PV Estimation Using Hybrid Deep Learning

Ongoing smart grid activities and the associated automation have resulted in a rich set of data. These data can be utilized for monitoring and estimating real-time photovoltaic (PV) generation. The inherent variability of PV and its impact on power systems is a challenging problem, and improving the accuracy of PV generation estimation is beneficial for both PV owners and grid operators. Recently, deep learning algorithms, made possible by the availability of data, have shown advantages for time-series estimation; however, their application to PV generation estimation is still at an early stage. In this paper, a hybrid estimation model combining a long short-term memory (LSTM) network and a persistence model (PM) is developed to provide day-ahead PV estimation at 15-minute intervals with high accuracy and robustness. Simulation results show the superior performance of the proposed method over existing methods for most of the test cases.
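The following is a minimal sketch of the hybrid idea, assuming PyTorch and made-up feature dimensions: a persistence baseline (the previous day's PV profile) is combined with an LSTM that learns a day-ahead correction at 15-minute resolution. It is not the paper's implementation.

```python
import torch
import torch.nn as nn

class HybridPVEstimator(nn.Module):
    """Illustrative hybrid of an LSTM and a persistence model (PM):
    the PM baseline is yesterday's PV profile, the LSTM learns a correction."""
    def __init__(self, n_features=8, hidden=64, horizon=96):  # 96 = 24 h at 15-min steps
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, history, yesterday_pv):
        # history: (batch, seq_len, n_features); yesterday_pv: (batch, horizon)
        _, (h, _) = self.lstm(history)
        correction = self.head(h[-1])      # (batch, horizon)
        return yesterday_pv + correction   # persistence baseline + learned correction

model = HybridPVEstimator()
history = torch.randn(4, 96, 8)
yesterday = torch.rand(4, 96)
day_ahead = model(history, yesterday)      # (4, 96) day-ahead estimate
```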

Heterogeneous Graph Matching Networks for Unknown Malware Detection

Information systems have widely been the target of malware attacks. Traditional signature-based malicious program detection algorithms can only detect known malware and are prone to evasion techniques such as binary obfuscation, while behavior-based approaches rely heavily on the malware training samples and incur prohibitively high training cost. To address the limitations of existing techniques, we propose MatchGNet, a heterogeneous Graph Matching Network model that learns the graph representation and similarity metric simultaneously based on the invariant graph modeling of the program's execution behaviors. We conduct a systematic evaluation of our model and show that it is accurate in detecting malicious program behavior and can help detect malware attacks with fewer false positives. MatchGNet outperforms the state-of-the-art algorithms in malware detection by generating 50% fewer false positives while keeping zero false negatives.
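A highly simplified sketch of the matching idea is shown below (PyTorch, homogeneous graphs, made-up dimensions): two programs' behavior graphs are embedded by a shared encoder and compared with a learned-embedding similarity metric. MatchGNet itself operates on heterogeneous invariant graphs with a more sophisticated encoder, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """Simplified graph encoder: one round of neighbor aggregation + mean pooling."""
    def __init__(self, in_dim=32, out_dim=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (nodes, in_dim); adj: (nodes, nodes) normalized adjacency
        h = F.relu(self.proj(adj @ x))
        return h.mean(dim=0)               # graph-level embedding

def similarity(g1, g2):
    # higher score = more similar execution behavior
    return F.cosine_similarity(g1, g2, dim=0)

encoder = GraphEncoder()
x1, adj1 = torch.randn(20, 32), torch.eye(20)   # behavior graph of program 1
x2, adj2 = torch.randn(15, 32), torch.eye(15)   # behavior graph of program 2
score = similarity(encoder(x1, adj1), encoder(x2, adj2))
```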

Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis

Developing conditional generative models for text-to-video synthesis is an extremely challenging yet important research topic in machine learning. In this work, we address this problem by introducing the Text-Filter conditioning Generative Adversarial Network (TFGAN), a conditional GAN model with a novel multi-scale text-conditioning scheme that improves text-video associations. By combining the proposed conditioning scheme with a deep GAN architecture, TFGAN generates high-quality videos from text on challenging real-world video datasets. In addition, we construct a synthetic dataset of text-conditioned moving shapes to systematically evaluate our conditioning scheme. Extensive experiments demonstrate that TFGAN significantly outperforms existing approaches, and can also generate videos of novel categories not seen during training.
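The sketch below illustrates the filter-conditioning idea in PyTorch with hypothetical dimensions: convolutional filters are generated from a text embedding and convolved with feature maps from the video pathway, so the text directly shapes the discriminative filters rather than being concatenated as just another feature. The full multi-scale scheme and GAN training loop are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextFilterGenerator(nn.Module):
    """Generates convolutional filters from a text embedding; the filters are
    convolved with feature maps from the video pathway to produce a
    text-conditioned response map."""
    def __init__(self, text_dim=256, in_ch=64, out_ch=16, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.fc = nn.Linear(text_dim, out_ch * in_ch * k * k)

    def forward(self, text_emb, feat_map):
        # text_emb: (text_dim,); feat_map: (1, in_ch, H, W)
        filters = self.fc(text_emb).view(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(feat_map, filters, padding=self.k // 2)  # (1, out_ch, H, W)

gen = TextFilterGenerator()
text = torch.randn(256)                  # encoded caption
frame_feats = torch.randn(1, 64, 16, 16) # feature map of one video frame
response = gen(text, frame_feats)        # text-conditioned response map
```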

Deep Supervision with Intermediate Concepts (IEEE)

Recent data-driven approaches to scene interpretation predominantly pose inference as an end-to-end black-box mapping, commonly performed by a Convolutional Neural Network (CNN). However, decades of work on perceptual organization in both human and machine vision suggest that there are often intermediate representations that are intrinsic to an inference task, and which provide essential structure to improve generalization. In this work, we explore an approach for injecting prior domain structure into neural network training by supervising hidden layers of a CNN with intermediate concepts that normally are not observed in practice. We formulate a probabilistic framework which formalizes these notions and predicts improved generalization via this deep supervision method. One advantage of this approach is that we are able to train only from synthetic CAD renderings of cluttered scenes, where concept values can be extracted, but apply the results to real images. Our implementation achieves state-of-the-art performance on 2D/3D keypoint localization and image classification on real image benchmarks including KITTI, PASCAL VOC, PASCAL3D+, IKEA, and CIFAR-100. We provide additional evidence that our approach outperforms alternative forms of supervision, such as multi-task networks.
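A minimal PyTorch sketch of deep supervision with an intermediate concept, using made-up layer sizes: an auxiliary head on a hidden layer is trained against concept labels alongside the final task head, and the two losses are summed. The probabilistic framework and the specific concept hierarchy from the paper are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedCNN(nn.Module):
    """CNN whose hidden layer carries an auxiliary head supervised with
    intermediate concept labels, in addition to the final task head."""
    def __init__(self, n_concepts=10, n_classes=100):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.concept_head = nn.Linear(32, n_concepts)  # supervises the hidden layer
        self.task_head = nn.Linear(64, n_classes)      # final task

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        concept_logits = self.concept_head(h1.mean(dim=(2, 3)))
        task_logits = self.task_head(h2.mean(dim=(2, 3)))
        return concept_logits, task_logits

model = DeeplySupervisedCNN()
images = torch.randn(8, 3, 32, 32)
concepts = torch.randint(0, 10, (8,))   # intermediate concept labels
labels = torch.randint(0, 100, (8,))    # final task labels
c_logits, t_logits = model(images)
loss = F.cross_entropy(t_logits, labels) + 0.5 * F.cross_entropy(c_logits, concepts)
loss.backward()
```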

Deep Co-Clustering

Co-clustering partitions instances and features simultaneously by leveraging the duality between them, and it often yields impressive performance improvement over traditional clustering algorithms. The recent development in learning deep representations has demonstrated the advantage in extracting effective features. However, the research on leveraging deep learning frameworks for co-clustering is limited for two reasons: 1) current deep clustering approaches usually decouple feature learning and cluster assignment as two separate steps, which cannot yield the task-specific feature representation; 2) existing deep clustering approaches cannot learn representations for instances and features simultaneously. In this paper, we propose a deep learning model for co-clustering called DeepCC. DeepCC utilizes the deep autoencoder for dimension reduction, and employs a variant of Gaussian Mixture Model (GMM) to infer the cluster assignments. A mutual information loss is proposed to bridge the training of instances and features. DeepCC jointly optimizes the parameters of the deep autoencoder and the mixture model in an end-to-end fashion on both the instance and the feature spaces, which can help the deep autoencoder escape from local optima and the mixture model circumvent the Expectation-Maximization (EM) algorithm. To the best of our knowledge, DeepCC is the first deep learning model for co-clustering. Experimental results on various datasets demonstrate the effectiveness of DeepCC.
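Below is a deliberately simplified sketch of the structure described above (PyTorch, made-up dimensions): an autoencoder reduces the instance space while soft cluster assignments are produced for both instances and features. DeepCC's GMM-based inference and the mutual-information coupling loss are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepCoClusterSketch(nn.Module):
    """Simplified co-clustering skeleton: autoencoder for dimension reduction
    plus soft cluster assignments for instances and for features."""
    def __init__(self, n_features=100, latent=10, k_inst=5, k_feat=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_features))
        self.inst_assign = nn.Linear(latent, k_inst)                       # instance clusters
        self.feat_assign = nn.Parameter(torch.randn(n_features, k_feat))   # feature clusters

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)
        inst_probs = F.softmax(self.inst_assign(z), dim=-1)   # (batch, k_inst)
        feat_probs = F.softmax(self.feat_assign, dim=-1)      # (n_features, k_feat)
        return recon, inst_probs, feat_probs

model = DeepCoClusterSketch()
x = torch.randn(32, 100)
recon, inst_probs, feat_probs = model(x)
loss = F.mse_loss(recon, x)   # reconstruction term only; clustering terms omitted
loss.backward()
```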

Battery Degradation Temporal Modeling Using LSTM Networks

Accurate modeling of battery capacity degradation is an important component for both battery manufacturers and energy management systems. In this paper, we develop a battery degradation model using deep learning algorithms. The model is trained with real data collected from battery storage solutions installed and operated for behind-the-meter customers. In the dataset, battery operation data are recorded at a five-minute resolution and battery capacity is measured every six months. In order to improve the training performance, we apply two preprocessing techniques, namely subsampling and feature extraction on the operation data, and we also interpolate between capacity measurements at times for which battery operation features are available. We integrate both cyclic and calendar aging processes in a unified framework by extracting the corresponding features from the operation data. The proposed model uses LSTM units followed by a fully-connected network to process weekly battery operation features and predict the capacity degradation. The experimental results show that our method can accurately predict the capacity fading and significantly outperforms baseline models, including persistence and autoregressive (AR) models.
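A minimal sketch of the described architecture, assuming PyTorch and hypothetical weekly feature dimensions: LSTM units process the sequence of weekly operation features and a fully-connected head predicts the remaining capacity. The preprocessing and aging-feature extraction steps are not shown.

```python
import torch
import torch.nn as nn

class DegradationModel(nn.Module):
    """LSTM over weekly operation features, followed by a fully-connected
    head that predicts battery capacity."""
    def __init__(self, n_features=12, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(hidden, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, weekly_features):
        # weekly_features: (batch, n_weeks, n_features)
        out, _ = self.lstm(weekly_features)
        return self.fc(out[:, -1])   # capacity estimate after the last week

model = DegradationModel()
weeks = torch.randn(4, 26, 12)       # roughly six months of weekly features
capacity = model(weeks)              # (4, 1) predicted capacity
```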

Conditioning Neural Networks: A Case Study of Electrical Load Forecasting

Machine learning tasks typically involve minimizing a loss function that measures the distance between the model output and the ground truth. In some applications, in addition to the usual loss function, the output must also satisfy certain requirements for further processing. We call such requirements model conditioning. We investigate cases where the conditioner is not differentiable or cannot be expressed in closed form and, hence, cannot be directly included in the loss function of the machine learning model. We propose to replace the conditioner with a learned dummy model which is applied to the output of the main model. The entire model, composed of the main and dummy models, is trained end-to-end. Throughout training, the dummy model learns to approximate the conditioner and thus forces the main model to generate outputs that satisfy the specified requirements. We demonstrate our approach on a use case of demand charge-aware electricity load forecasting. We show that jointly minimizing the error in the forecast load and its demand charge threshold results in significant improvements over existing load forecasting methods.
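The sketch below illustrates the dummy-model idea in PyTorch with a hypothetical stand-in conditioner (the peak of the load profile, used here as a proxy for the demand-charge threshold): the dummy model is trained to mimic the conditioner on the forecaster's outputs, while the composed model is trained to match both the load and its conditioner value. This is a toy setup, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical conditioner: the peak of a daily load profile, treated here as
# a quantity that cannot be placed directly in the loss function.
def conditioner(profile):
    return profile.max(dim=1, keepdim=True).values

main_model = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 24))   # load forecaster
dummy_model = nn.Sequential(nn.Linear(24, 16), nn.ReLU(), nn.Linear(16, 1))   # surrogate conditioner
opt = torch.optim.Adam(list(main_model.parameters()) + list(dummy_model.parameters()), lr=1e-3)

history, true_load = torch.randn(16, 48), torch.randn(16, 24)

for _ in range(200):
    forecast = main_model(history)
    # 1) dummy model learns to mimic the conditioner on (detached) forecasts
    mimic_loss = F.mse_loss(dummy_model(forecast.detach()), conditioner(forecast.detach()))
    # 2) composed model is pushed to match both the load and its conditioner value
    forecast_loss = F.mse_loss(forecast, true_load)
    condition_loss = F.mse_loss(dummy_model(forecast), conditioner(true_load))
    loss = forecast_loss + condition_loss + mimic_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```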

Scalable Deep k-Subspace Clustering

Subspace clustering algorithms are notorious for their scalability issues because building and processing large affinity matrices are demanding. In this paper, we introduce a method that simultaneously learns an embedding space along with subspaces within it to minimize a notion of reconstruction error, thus addressing the problem of subspace clustering in an end-to-end learning paradigm. To achieve our goal, we propose a scheme to update subspaces within a deep neural network. This in turn frees us from the need for an affinity matrix to perform clustering. Unlike previous attempts, our method can easily scale up to large datasets, making it unique in the context of unsupervised learning with deep architectures. Our experiments show that our method significantly improves the clustering accuracy while enjoying a much smaller memory footprint.
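A rough sketch of the k-subspace idea in PyTorch with made-up dimensions: points are embedded, each embedded point is assigned to the subspace with the smallest projection residual, and that residual serves as the reconstruction loss. The paper's scheme for updating the subspaces inside the network is not reproduced here; the bases below are simply fixed random orthonormal matrices.

```python
import torch
import torch.nn as nn

def subspace_residuals(z, bases):
    """Squared residual of each embedded point w.r.t. each subspace.
    z: (n, d); bases: (k, d, q) with orthonormal columns."""
    proj = torch.einsum('nd,kdq->nkq', z, bases)        # coordinates in each subspace
    recon = torch.einsum('nkq,kdq->nkd', proj, bases)   # projections back into R^d
    return ((z.unsqueeze(1) - recon) ** 2).sum(dim=-1)  # (n, k)

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
k, d, q = 10, 32, 5
bases = torch.linalg.qr(torch.randn(k, d, q)).Q         # k random q-dimensional subspaces

x = torch.randn(256, 784)
z = encoder(x)
res = subspace_residuals(z, bases)
assignments = res.argmin(dim=1)                          # cluster = nearest subspace
loss = res.min(dim=1).values.mean()                      # reconstruction error to minimize
loss.backward()
```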