Model correction typically refers to refining or adjusting a mathematical or computational model so that it better aligns with observed data or improves its predictive accuracy. The goal is to enhance the model's performance by incorporating new information, addressing inaccuracies, or adapting to changes in the underlying system.

Posts

KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models

Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples. To augment KGs with new knowledge, researchers have proposed models for KG Completion (KGC) tasks such as link prediction, i.e., answering (h, p, ?) or (?, p, t) queries. Such models are usually evaluated with averaged metrics on a held-out test set. While useful for tracking progress, averaged single-score metrics cannot reveal what exactly a model has learned, or failed to learn. To address this issue, we propose KGxBoard: an interactive framework for performing fine-grained evaluation on meaningful subsets of the data, each of which tests individual and interpretable capabilities of a KGC model. In our experiments, we highlight findings that we discovered with KGxBoard, which would have been impossible to detect with standard averaged single-score metrics.
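To make the link-prediction setup concrete, here is a minimal sketch of how a KGC model can score candidate tails for an (h, p, ?) query. It assumes TransE-style embeddings, where score(h, p, t) = -||h + p - t||; the entities, relation, and random embeddings are illustrative placeholders, not from KGxBoard or any trained model.

```python
import numpy as np

# Toy TransE-style link prediction: score(h, p, t) = -||h + p - t||.
# All names and embeddings below are illustrative placeholders.
rng = np.random.default_rng(0)
entities = ["berlin", "germany", "paris", "france"]
relations = ["capital_of"]
ent_emb = {e: rng.normal(size=8) for e in entities}
rel_emb = {r: rng.normal(size=8) for r in relations}

def answer_query(h, p):
    """Rank all entities as candidate tails for the query (h, p, ?)."""
    scores = {
        t: -np.linalg.norm(ent_emb[h] + rel_emb[p] - ent_emb[t])
        for t in entities
    }
    # Best-scoring (least distant) candidate tail first.
    return sorted(scores, key=scores.get, reverse=True)

ranking = answer_query("berlin", "capital_of")
print(ranking)
```

A standard averaged metric would reduce many such rankings to one number (e.g., mean reciprocal rank); KGxBoard's point is to break that number down over meaningful data subsets instead.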

Fast Few-shot Debugging for NLU Test Suites

We study few-shot debugging of transformer-based natural language understanding models, using recently popularized test suites to not just diagnose but correct a problem. Given a few debugging examples of a certain phenomenon, and a held-out test set of the same phenomenon, we aim to maximize accuracy on the phenomenon at a minimal cost of accuracy on the original test set. We examine several methods that are faster than full epoch retraining. We introduce a new fast method, which samples a few in-danger examples from the original training set. Compared to fast methods using parameter distance constraints or Kullback-Leibler divergence, we achieve superior original accuracy for comparable debugging accuracy.
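The data-selection step described above can be sketched as follows. This is a hedged illustration, not the paper's exact method: it assumes "in-danger" means original training examples the current model classifies with the lowest confidence, and the toy probabilities stand in for a real model's predictions.

```python
import numpy as np

# Hedged sketch of selecting "in-danger" examples from the original
# training set before a brief fine-tuning pass. The selection criterion
# (lowest predicted-class confidence) and the toy data are assumptions.
rng = np.random.default_rng(1)

def confidence(probs):
    # Confidence = probability assigned to the predicted (argmax) class.
    return probs.max(axis=1)

def sample_in_danger(train_probs, k):
    """Pick the k original training examples the model is least sure about."""
    return np.argsort(confidence(train_probs))[:k]

# Toy class-probability predictions for 10 original training examples.
train_probs = rng.dirichlet(np.ones(2), size=10)
idx = sample_in_danger(train_probs, k=3)
print(idx)
```

Mixing these examples with the few debugging examples during a short fine-tuning run is one plausible way to guard original-test-set accuracy while fixing the targeted phenomenon.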