Fairness in machine learning means ensuring that models make unbiased, equitable decisions even as the data distribution shifts across domains over time. Traditional methods focus mainly on optimizing model performance in new domains and often overlook fairness. The proposed framework, Disentanglement for Counterfactual Fairness-aware Domain Generalization (DCFDG), addresses this by separating domain-specific and sensitive information from the features used for classification. Decisions then rely only on the relevant semantic information, with a fairness regularizer keeping outcomes equitable across continuously evolving data.

Towards Counterfactual Fairness-aware Domain Generalization in Changing Environments

Domain generalization is a common challenge in machine learning: in practice, the data distribution may evolve progressively across a continuum of sequential domains. Current methods concentrate mainly on improving model performance in these new domains and tend to neglect fairness throughout the learning process. We propose a framework called Disentanglement for Counterfactual Fairness-aware Domain Generalization (DCFDG), which removes domain-specific information and sensitive information from the embedded representation used for classification. To examine the interplay between semantic information, domain-specific information, and sensitive attributes, we partition the exogenous factors into four latent variables. With fairness regularization, only the semantic information is used for classification. Experiments on synthetic and real-world datasets show that our approach achieves high accuracy while preserving fairness across continuously evolving domains.
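To make the core idea concrete, here is a minimal sketch of latent disentanglement with a fairness regularizer. This is not the authors' DCFDG implementation: the factor names (semantic, domain, sensitive, residual), the toy linear encoder, and the choice of a demographic-parity gap as the regularizer are all illustrative assumptions; the paper's actual architecture and fairness criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy encoder: a single linear map with tanh, standing in for a
    learned encoder that produces the latent representation."""
    return np.tanh(x @ W)

def split_latent(z):
    """Partition the latent vector into four factors, mirroring the
    paper's split of exogenous factors into four latent variables.
    The factor names here are our illustrative labels."""
    sem, dom, sens, res = np.split(z, 4, axis=-1)
    return {"semantic": sem, "domain": dom, "sensitive": sens, "residual": res}

def classify(semantic, V):
    """Key point of the method: predictions use ONLY the semantic factor,
    never the domain-specific or sensitive factors."""
    logits = semantic @ V
    return (logits > 0).astype(int).ravel()

def demographic_parity_gap(y_pred, group):
    """One possible fairness regularizer (an assumption, not necessarily
    the paper's): |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    p0 = y_pred[group == 0].mean()
    p1 = y_pred[group == 1].mean()
    return abs(p0 - p1)

# Toy data: 8-dim features, binary sensitive attribute.
X = rng.normal(size=(200, 8))
g = rng.integers(0, 2, size=200)
W = rng.normal(size=(8, 16))   # latent dim 16 -> four 4-dim factors
V = rng.normal(size=(4, 1))    # classifier sees only the 4-dim semantic part

z = encode(X, W)
factors = split_latent(z)
y_hat = classify(factors["semantic"], V)
gap = demographic_parity_gap(y_hat, g)  # would be added to the training loss
```

In a full training loop, the classification loss on the semantic factor and the fairness gap would be minimized jointly, while auxiliary objectives push domain and sensitive information out of the semantic factor.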