Task Representations refer to the conceptual or computational frameworks used to describe and understand tasks within various domains, such as artificial intelligence, cognitive psychology, and human-computer interaction. These representations capture the essential components of a task, including its goals, constraints, procedures, and relationships among different elements.

In the context of artificial intelligence and machine learning, task representations play a crucial role in defining the objectives and constraints of learning algorithms. They specify the input-output mappings that algorithms are designed to learn, and they guide feature selection, model design, and optimization. Task representations can take various forms, including symbolic rules, structured data, graphs, or statistical distributions, depending on the nature of the task and the available information.
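
One concrete form such a representation can take is a structured record. The Python sketch below shows a minimal, entirely hypothetical representation of a few-shot classification task with support and query sets; the class and field names are illustrative assumptions, not a standard API.

    from dataclasses import dataclass
    from typing import List, Tuple

    Example = Tuple[List[float], int]  # (feature vector, class label)

    @dataclass
    class ClassificationTask:
        """Structured representation of one supervised task.

        Captures the goal (predict one of num_classes labels), a constraint
        (inputs of dimension input_dim), and the data that defines the
        input-output mapping to be learned.
        """
        name: str
        input_dim: int
        num_classes: int
        support: List[Example]  # labeled examples available for adaptation
        query: List[Example]    # held-out examples for evaluating the task

    # Hypothetical usage: a tiny 2-way task over 2-dimensional inputs.
    task = ClassificationTask(
        name="toy-2way",
        input_dim=2,
        num_classes=2,
        support=[([0.1, 0.9], 0), ([0.8, 0.2], 1)],
        query=[([0.2, 0.7], 0)],
    )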


Hierarchical Gaussian Mixture based Task Generative Model for Robust Meta-Learning

Meta-learning enables quick adaptation of machine learning models to new tasks with limited data. While tasks in practice may come from varying distributions, most existing meta-learning methods assume that training and testing tasks are drawn from the same uni-component distribution, overlooking two critical needs of a practical solution: (1) the various sources of tasks may compose a multi-component mixture distribution, and (2) novel tasks may come from a distribution that is unseen during meta-training. In this paper, we demonstrate that these two challenges can be addressed jointly by modeling the density of task instances. We develop a meta-training framework built on a novel Hierarchical Gaussian Mixture based Task Generative Model (HTGM). HTGM extends the widely used empirical process of sampling tasks into a theoretical model that learns task embeddings, fits the mixture distribution of tasks, and enables density-based scoring of novel tasks. The framework is agnostic to the encoder and scales well with large backbone networks. The model parameters are learned end-to-end by maximum likelihood estimation via an Expectation-Maximization (EM) algorithm. Extensive experiments on benchmark datasets show the effectiveness of our method for both sample classification and novel task detection.
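
The paper's hierarchical model and end-to-end encoder training are not reproduced here. As a rough illustration of the underlying idea only (fit a Gaussian mixture to task embeddings by maximum-likelihood EM, then score novel tasks by their density under the fitted mixture), the following sketch applies a plain, non-hierarchical mixture with spherical components to precomputed embeddings. The function names, the spherical-covariance simplification, and the fixed embeddings are all assumptions; HTGM itself learns the embeddings jointly with the mixture.

    import numpy as np

    def _logsumexp(a, axis):
        # Numerically stable log-sum-exp along the given axis.
        m = a.max(axis=axis, keepdims=True)
        return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

    def _component_logpdf(X, mu, var):
        # Log-density of each point under each spherical Gaussian component.
        # X: (n, d), mu: (k, d), var: (k,) -> result: (n, k)
        d = X.shape[1]
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        return -0.5 * (sq / var + d * np.log(2 * np.pi * var))

    def fit_gmm_em(X, k, iters=100, seed=0):
        # Maximum-likelihood fit of a k-component spherical Gaussian mixture via EM.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        mu = X[rng.choice(n, size=k, replace=False)]   # init means at random points
        var = np.full(k, X.var() + 1e-6)               # shared initial variance
        w = np.full(k, 1.0 / k)                        # uniform mixture weights
        for _ in range(iters):
            # E-step: posterior responsibility of each component for each point.
            log_p = _component_logpdf(X, mu, var) + np.log(w)
            resp = np.exp(log_p - _logsumexp(log_p, axis=1)[:, None])
            # M-step: re-estimate weights, means, and variances from responsibilities.
            nk = resp.sum(0) + 1e-12
            w = nk / n
            mu = (resp.T @ X) / nk[:, None]
            sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
            var = (resp * sq).sum(0) / (d * nk) + 1e-6
        return w, mu, var

    def novelty_score(X, w, mu, var):
        # Negative log-density under the fitted mixture: higher means more novel.
        return -_logsumexp(_component_logpdf(X, mu, var) + np.log(w), axis=1)

A hypothetical usage, with synthetic embeddings standing in for two training-task sources and one shifted novel source:

    rng = np.random.default_rng(1)
    train = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(5, 1, (200, 8))])
    w, mu, var = fit_gmm_em(train, k=2)
    print(novelty_score(rng.normal(0, 1, (5, 8)), w, mu, var))   # low: in-distribution
    print(novelty_score(rng.normal(10, 1, (5, 8)), w, mu, var))  # high: novel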