Variational methods replace a difficult objective or model with a tractable parameterized family (a smoothing distribution, a graph partition, or a latent representation) and optimize over that family. The posts below apply this idea to optimizing discontinuous objectives, learning task-dependent graph partitions, fitting multilevel genetic algorithms, and building quantum variational autoencoders that regularize latent representations for data compression and generation.

Posts

Discrete-Continuous Variational Optimization with Local Gradients

Variational optimization (VO) offers a general approach for handling objectives that involve discontinuities or whose gradients are difficult to compute. By introducing a variational distribution over the parameter space, such objectives are smoothed and rendered amenable to gradient-based optimization. In certain problems, however, local gradient information is available yet neglected by this approach. We therefore propose a general method for incorporating local information via an augmented VO objective function, accelerating convergence and improving accuracy. We show how our augmented objective can be viewed as an instance of multilevel optimization. Finally, we show that our method can train a genetic algorithm simulator using a recursive Wasserstein distance objective.
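The core idea of mixing smoothed and local gradients can be illustrated with a toy sketch: a score-function (REINFORCE) estimate of the smoothed objective's gradient is blended with local gradients of the differentiable part of the objective. The test objective, the mixing weight `alpha`, and all other names here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    # Smooth bowl plus a discontinuous step penalty at theta > 1.
    return theta**2 + 5.0 * (theta > 1.0)

def local_grad(theta):
    # Gradient of the smooth part only -- the available "local" information.
    return 2.0 * theta

def vo_step(mu, sigma=0.5, n=256, lr=0.05, alpha=0.5):
    # Sample from the variational distribution N(mu, sigma^2).
    theta = mu + sigma * rng.standard_normal(n)
    fx = f(theta)
    # Score-function estimate of the smoothed objective's gradient,
    # with a mean baseline for variance reduction.
    g_score = np.mean((fx - fx.mean()) * (theta - mu) / sigma**2)
    # Augmentation: average local gradient over the same samples.
    g_local = np.mean(local_grad(theta))
    return mu - lr * ((1 - alpha) * g_score + alpha * g_local)

mu = 3.0
for _ in range(200):
    mu = vo_step(mu)
# mu has moved from 3.0 toward the smoothed minimum near 0.
```

The local-gradient term has far lower variance than the score-function term, so the blend converges faster than pure VO while the smoothed term still accounts for the discontinuity.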

A Variational Graph Partitioning Approach to Modeling Protein Liquid-liquid Phase Separation

Graph neural networks (GNNs) have emerged as powerful tools for representation learning. Their efficacy depends on the quality of the underlying graph, and in many cases the most relevant information resides in specific subgraphs. In this work, we introduce a GNN-based framework, the graph-partitioned GNN (GP-GNN), which partitions the input graph to focus on the most relevant subgraphs. Our approach jointly learns task-dependent graph partitions and node representations, making it particularly effective when critical features reside within initially unidentified subgraphs. Protein liquid-liquid phase separation (LLPS) is especially well suited to GP-GNNs because intrinsically disordered regions (IDRs) are known to function as protein subdomains that play a key role in the phase separation process. We demonstrate that GP-GNN accurately predicts LLPS by partitioning protein graphs into task-relevant subgraphs consistent with known IDRs. Our model achieves state-of-the-art accuracy in predicting LLPS and offers biological insights valuable for downstream investigation.
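A minimal sketch of the joint partition-and-represent idea, assuming a soft node-to-subgraph assignment learned alongside a single message-passing layer. The names `part_logits`, `W_msg`, and `w_out`, the mean aggregation, and the sigmoid readout are illustrative stand-ins, not the components of the actual GP-GNN model.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gp_gnn_forward(A, X, part_logits, W_msg, w_out):
    """One message-passing layer followed by a soft graph partition.

    A: (n, n) adjacency, X: (n, d) node features,
    part_logits: (n, k) learnable node-to-subgraph assignment logits,
    W_msg: (d, d) message weights, w_out: (k*d,) readout weights.
    """
    # Mean-aggregation message passing (one GNN layer).
    deg = A.sum(1, keepdims=True) + 1e-8
    H = np.tanh((A @ X) / deg @ W_msg)
    # Soft partition: each node distributes its features across k subgraphs.
    S = softmax(part_logits, axis=1)           # (n, k)
    pooled = S.T @ H                           # (k, d) per-subgraph embedding
    # Task readout over the concatenated subgraph embeddings.
    logit = pooled.ravel() @ w_out
    return 1.0 / (1.0 + np.exp(-logit)), S

n, d, k = 6, 4, 2
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
X = rng.standard_normal((n, d))
p, S = gp_gnn_forward(A, X, rng.standard_normal((n, k)),
                      rng.standard_normal((d, d)) * 0.3,
                      rng.standard_normal(k * d) * 0.3)
```

Because the assignment `S` is differentiable, the partition and the node representations can be trained jointly against the downstream prediction loss, which is the key point of the framework.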

Variational Methods for Learning Multilevel Genetic Algorithms using the Kantorovich Monad

Levels of selection and multilevel evolutionary processes are essential concepts in evolutionary theory, and yet there is a lack of common mathematical models for these core ideas. Here, we propose a unified mathematical framework for formulating and optimizing multilevel evolutionary processes and genetic algorithms over arbitrarily many levels based on concepts from category theory and population genetics. We formulate a multilevel version of the Wright-Fisher process using this approach, and we show that this model can be analyzed to clarify key features of multilevel selection. In particular, we derive an extended multilevel probabilistic version of Price's Equation via the Kantorovich Monad, and we use this to characterize regimes of parameter space within which selection acts antagonistically or cooperatively across levels. Finally, we show how our framework provides a unified setting for learning genetic algorithms (GAs), and we use variational optimization and a multilevel analogue of coalescent analysis to fit multilevel GAs to simulated data.
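The flavor of a multilevel Wright-Fisher process can be conveyed with a toy two-level simulation: selection acts on individuals within groups and on whole groups according to their composition. The specific fitness parameterization (`w_ind`, `w_grp`) is an illustrative assumption, not the paper's model, which is formulated categorically over arbitrarily many levels.

```python
import numpy as np

rng = np.random.default_rng(2)

def two_level_wright_fisher(pop, w_ind, w_grp, generations=50):
    """Toy two-level Wright-Fisher step: pop is a (G, N) 0/1 allele matrix.

    w_ind: relative within-group fitness of allele 1,
    w_grp: weight of a group's allele-1 frequency in group-level fitness.
    """
    G, N = pop.shape
    for _ in range(generations):
        # Between-group step: resample whole groups by group-level fitness.
        freq = pop.mean(1)
        grp_fit = 1.0 + w_grp * freq
        parents = rng.choice(G, size=G, p=grp_fit / grp_fit.sum())
        pop = pop[parents]
        # Within-group step: binomial resampling biased by individual fitness.
        freq = pop.mean(1)
        p1 = w_ind * freq / (w_ind * freq + (1.0 - freq))
        pop = (rng.random((G, N)) < p1[:, None]).astype(int)
    return pop

pop0 = (rng.random((20, 50)) < 0.5).astype(int)
# Allele 1 is costly within groups (w_ind < 1) but benefits its group (w_grp > 0),
# so the two levels of selection act antagonistically:
final = two_level_wright_fisher(pop0, w_ind=0.95, w_grp=3.0)
```

Sweeping `w_ind` and `w_grp` in a simulator like this traces out exactly the kind of antagonistic-versus-cooperative parameter regimes that the multilevel Price Equation characterizes analytically.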

ζ-QVAE: A Quantum Variational Autoencoder utilizing Regularized Mixed-state Latent Representations

A major challenge in near-term quantum computing is its application to large real-world datasets due to scarce quantum hardware resources. One approach to enabling tractable quantum models for such datasets involves compressing the original data to manageable dimensions while still representing essential information for downstream analysis. In classical machine learning, variational autoencoders (VAEs) facilitate efficient data compression, representation learning for subsequent tasks, and novel data generation. However, no model has been proposed that exactly captures all of these features for direct application to quantum data on quantum computers. Some existing quantum models for data compression lack regularization of latent representations, preventing their direct use for generation and limiting control over generalization. Others are hybrid models with only some internal quantum components, impeding direct training on quantum data. To bridge this gap, we present a fully quantum framework, ζ-QVAE, which encompasses all the capabilities of classical VAEs and can be directly applied for both classical and quantum data compression. Our model utilizes regularized mixed states to attain optimal latent representations. It accommodates various divergences for reconstruction and regularization. Furthermore, by accommodating mixed states at every stage, it can utilize the full-data density matrix and allow for a "global" training objective. Doing so, in turn, makes efficient optimization possible and has potential implications for private and federated learning. In addition to exploring the theoretical properties of ζ-QVAE, we demonstrate its performance on representative genomics and synthetic data. Our results consistently indicate that ζ-QVAE exhibits similar or better performance compared to matched classical models.
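One way to picture a mixed-state latent representation: a unitary "encoder" followed by discarding a subsystem (a partial trace) maps a density matrix to a smaller density matrix, which remains a valid mixed state by construction. This numpy sketch illustrates only that encoding step under a random unitary; it is not the ζ-QVAE architecture, its regularization, or its training objective.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_unitary(dim):
    # Haar-distributed unitary via QR of a complex Gaussian matrix,
    # with the phases of R's diagonal absorbed to fix the gauge.
    z = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def partial_trace_last(rho, keep_dim, drop_dim):
    # Trace out the trailing subsystem of a (keep_dim*drop_dim)-dim state.
    rho = rho.reshape(keep_dim, drop_dim, keep_dim, drop_dim)
    return np.einsum("ikjk->ij", rho)

def encode(rho, U, keep_dim, drop_dim):
    # Unitary "encoder" followed by discarding qubits: a mixed-state latent.
    return partial_trace_last(U @ rho @ U.conj().T, keep_dim, drop_dim)

# Two-qubit mixed input state: maximally mixed qubit tensored with |0><0|.
rho_in = np.kron(np.eye(2) / 2, np.diag([1.0, 0.0])).astype(complex)
U = random_unitary(4)
latent = encode(rho_in, U, keep_dim=2, drop_dim=2)
# latent is a valid single-qubit density matrix: Hermitian, PSD, trace one.
```

Because every stage maps density matrices to density matrices, the same machinery applies whether the input state encodes classical data or is genuinely quantum, which is what allows a fully quantum, end-to-end trainable pipeline.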