Few-Shot refers to few-shot learning, a machine learning paradigm in which a model is trained to recognize or perform a task from only a very small number of examples per class. Traditional machine learning approaches typically require large amounts of labeled data to generalize well to new, unseen examples. In few-shot learning, by contrast, the goal is to enable the model to learn from a limited set of examples, sometimes as few as one per class.
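As a concrete illustration, one common few-shot approach classifies a query by comparing it to the mean embedding ("prototype") of each class's few support examples, in the spirit of prototypical networks. The sketch below uses tiny hypothetical 2-D embeddings and class names purely for illustration; it is not tied to any particular dataset or model.

```python
import numpy as np

def classify_few_shot(support, query):
    """support: dict mapping class label -> array of shape (k, d),
    with only k (few) example embeddings per class.
    Returns the label whose prototype (mean embedding) is nearest."""
    prototypes = {label: examples.mean(axis=0)
                  for label, examples in support.items()}
    return min(prototypes,
               key=lambda lbl: np.linalg.norm(query - prototypes[lbl]))

# One-shot case: a single labeled embedding per class (hypothetical data).
support = {
    "cat": np.array([[1.0, 0.0]]),
    "dog": np.array([[0.0, 1.0]]),
}
print(classify_few_shot(support, np.array([0.9, 0.1])))  # -> cat
```

With only one support example per class this reduces to nearest-neighbor classification in embedding space; with a handful of examples, averaging into a prototype smooths out noise in individual examples.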

Posts

KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models

Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples. To augment KGs with new knowledge, researchers proposed models for KG Completion (KGC) tasks such as link prediction, i.e., answering (h, p, ?) or (?, p, t) queries. Such models are usually evaluated with averaged metrics on a held-out test set. While useful for tracking progress, averaged single-score metrics cannot reveal what exactly a model has learned — or failed to learn. To address this issue, we propose KGxBoard: an interactive framework for performing fine-grained evaluation on meaningful subsets of the data, each of which tests individual and interpretable capabilities of a KGC model. In our experiments, we highlight the findings that we discovered with the use of KGxBoard, which would have been impossible to detect with standard averaged single-score metrics.
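To make the link prediction setup concrete, a (h, p, ?) query can be answered by scoring every candidate tail entity and ranking them. The sketch below uses a DistMult-style scoring function over a tiny toy KG with made-up embeddings; the entities, relation, and embedding values are purely illustrative assumptions, not part of KGxBoard itself.

```python
import numpy as np

# Toy KG embeddings (hypothetical values for illustration only).
entities = {
    "Berlin":  np.array([1.0, 0.0]),
    "Germany": np.array([0.9, 0.1]),
    "France":  np.array([0.0, 1.0]),
}
relations = {"capital_of": np.array([1.0, 1.0])}

def score(h, p, t):
    # DistMult score: sum of the element-wise product of the
    # head, relation, and tail embeddings.
    return float(np.sum(entities[h] * relations[p] * entities[t]))

def predict_tail(h, p):
    """Answer a (h, p, ?) query by ranking all candidate tails."""
    candidates = [e for e in entities if e != h]
    return max(candidates, key=lambda t: score(h, p, t))

print(predict_tail("Berlin", "capital_of"))  # (Berlin, capital_of, ?) -> Germany
```

Standard evaluation would average a rank-based metric over many such queries; fine-grained evaluation in the spirit of the paper instead breaks the test set into interpretable subsets (e.g., by relation type) and reports metrics per subset.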

Fast Few-shot Debugging for NLU Test Suites

We study few-shot debugging of transformer-based natural language understanding models, using recently popularized test suites to not just diagnose but correct a problem. Given a few debugging examples of a certain phenomenon, and a held-out test set of the same phenomenon, we aim to maximize accuracy on the phenomenon at minimal cost to accuracy on the original test set. We examine several methods that are faster than full-epoch retraining. We introduce a new fast method, which samples a few in-danger examples from the original training set. Compared to fast methods using parameter distance constraints or Kullback-Leibler divergence, we achieve superior original accuracy for comparable debugging accuracy.
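One way to picture the "in-danger" sampling idea is to select the original training examples the model is least sure about, i.e., those closest to being flipped by the debugging update, and mix them in during the fast fine-tuning step. The margin-based selection criterion below is an assumption for illustration, not necessarily the paper's exact definition, and the stub model and data are hypothetical.

```python
def sample_in_danger(train_examples, predict_proba, n):
    """train_examples: list of (features, label) pairs.
    predict_proba(x) returns a dict mapping label -> probability.
    Keeps the n examples with the smallest margin between the correct
    label's probability and the best competing label's probability.
    NOTE: this margin criterion is an illustrative assumption."""
    def margin(example):
        x, y = example
        probs = predict_proba(x)
        best_other = max(p for lbl, p in probs.items() if lbl != y)
        return probs[y] - best_other  # small or negative => in danger

    return sorted(train_examples, key=margin)[:n]

# Toy usage with a stub model (hypothetical probabilities).
data = [("a", 0), ("b", 1), ("c", 0)]
probs = {"a": {0: 0.9, 1: 0.1},
         "b": {0: 0.45, 1: 0.55},
         "c": {0: 0.6, 1: 0.4}}
print(sample_in_danger(data, lambda x: probs[x], 2))  # -> [('b', 1), ('c', 0)]
```

Training on the few debugging examples plus these near-boundary originals aims to fix the target phenomenon while protecting exactly the original-set predictions most at risk of regressing, without the cost of a full retraining epoch.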