Generalizing Fairness: Discovery and Mitigation of Unknown Sensitive Attributes
Ensuring trusted artificial intelligence (AI) in the real world is a critical challenge. A still largely unexplored task is determining the major real-world factors that affect the behavior and robustness of a given AI module (e.g., weather or illumination conditions). Specifically, we seek to discover the factors that cause AI systems to fail and to mitigate their influence. Identifying these factors usually relies heavily on data diverse enough to cover numerous combinations of them, but exhaustively collecting such data is onerous and sometimes impossible in complex environments. This paper investigates methods that discover and mitigate the effects of semantic sensitive factors within a given dataset. We also generalize the definition of fairness, which normally addresses only socially relevant factors, to cover more broadly the desensitization of AI systems with regard to all possible aspects of variation in the domain. By discovering these major factors, the proposed methods reduce the potentially onerous demands of collecting a sufficiently diverse dataset. In experiments on road-sign (GTSRB) and facial-imagery (CelebA) datasets, we demonstrate the promise of these new methods and show that they outperform state-of-the-art approaches.
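As a minimal illustration of this generalized notion of fairness (a sketch of our own, not the paper's method), one can quantify a model's sensitivity to any factor of variation as the spread in accuracy across the groups that factor induces; all function and variable names below are hypothetical.

```python
# Sketch: a "generalized fairness" gap, i.e. the spread in accuracy across
# groups defined by an arbitrary sensitive factor (illumination, weather,
# etc.), not only socially relevant attributes. Illustrative only.
import numpy as np

def accuracy_gap(y_true: np.ndarray, y_pred: np.ndarray,
                 factor: np.ndarray) -> float:
    """Gap between best- and worst-group accuracy, where groups are the
    distinct values taken by the sensitive factor."""
    accs = []
    for g in np.unique(factor):
        mask = factor == g
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return float(max(accs) - min(accs))

# Example: a classifier that fails only under one illumination condition.
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 1])
light  = np.array(["day", "day", "day", "night", "night", "night"])
print(accuracy_gap(y_true, y_pred, light))  # 1.0 -> perfect by day, fails at night
```

A large gap flags the factor as a candidate "unknown sensitive attribute" whose influence should be mitigated.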