Changing environments refer to the evolving nature of data distributions across sequential domains in real-world scenarios. As machine learning models are exposed to new domains over time, the underlying data characteristics shift, affecting both the model's performance and its fairness. These dynamic environments make generalization difficult to maintain: the system must adapt to new conditions while preserving fairness across domains. The proposed framework, DCFDG, addresses this challenge by disentangling domain-specific and sensitive information, keeping models accurate and fair in the face of continuous change.


Towards Counterfactual Fairness-aware Domain Generalization in Changing Environments

Domain generalization is a common challenge in machine learning: in practical settings, the data distribution may evolve progressively across a continuum of sequential domains. Current methods concentrate on improving model effectiveness in these new domains but tend to neglect fairness throughout the learning process. In response, we propose Disentanglement for Counterfactual Fairness-aware Domain Generalization (DCFDG), a framework that removes domain-specific information and sensitive information from the embedded representation used for classification. To analyze the interplay between semantic information, domain-specific information, and sensitive attributes, we partition the exogenous factors into four latent variables. By incorporating fairness regularization, the model uses semantic information exclusively for classification. Experiments on synthetic and real-world datasets demonstrate the efficacy of our approach, achieving high accuracy while preserving fairness across evolving, continuous domains.
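To make the four-way latent partition concrete, here is a minimal NumPy sketch of the idea described above: a toy encoder splits the latent code into semantic, domain-specific, sensitive, and residual-noise blocks, the classifier reads only the semantic block, and a fairness penalty discourages group-dependent predictions. Everything here is illustrative: the linear encoder, the block sizes, and the demographic-parity-style penalty are stand-in assumptions, not the paper's actual architecture or its counterfactual-fairness regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's actual sizes are not specified here.
D_IN, D_SEM, D_DOM, D_SENS, D_NOISE = 8, 3, 2, 2, 1
D_LATENT = D_SEM + D_DOM + D_SENS + D_NOISE

# Toy linear "encoder" standing in for the learned disentangling encoder.
W_enc = rng.normal(size=(D_IN, D_LATENT))

def encode(x):
    """Map inputs to a latent code partitioned into four blocks:
    semantic, domain-specific, sensitive, and residual noise."""
    z = x @ W_enc
    sem = z[:, :D_SEM]
    dom = z[:, D_SEM:D_SEM + D_DOM]
    sens = z[:, D_SEM + D_DOM:D_SEM + D_DOM + D_SENS]
    noise = z[:, D_SEM + D_DOM + D_SENS:]
    return sem, dom, sens, noise

# Toy classifier that sees the semantic block only,
# as the framework prescribes.
W_clf = rng.normal(size=(D_SEM,))

def predict(sem):
    return 1.0 / (1.0 + np.exp(-(sem @ W_clf)))

def fairness_penalty(preds, a):
    """Demographic-parity-style gap |E[pred|a=1] - E[pred|a=0]|,
    a simple stand-in for a fairness regularization term."""
    return abs(preds[a == 1].mean() - preds[a == 0].mean())

# Toy batch with a binary sensitive attribute.
x = rng.normal(size=(16, D_IN))
a = np.array([0, 1] * 8)
sem, dom, sens, noise = encode(x)
preds = predict(sem)
loss_fair = fairness_penalty(preds, a)
```

In training, such a penalty would be added to the classification loss so that gradients push the semantic block away from encoding group information, while the domain and sensitive blocks absorb what was removed.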