
Using Synthetic Images To Uncover Population Biases In Facial Landmarks Detection

by Ran Shadmi, et al.

To analyze a trained model's performance and identify its weak spots, one must set aside a portion of the data for testing. The test set has to be large enough to detect statistically significant biases with respect to all relevant sub-groups in the target population. This requirement can be difficult to satisfy, especially in data-hungry applications. We propose to overcome this difficulty by generating a synthetic test set. Using the facial landmarks detection task to validate our proposal, we show that all the biases observed on real datasets also appear on a carefully designed synthetic dataset. This demonstrates that synthetic test sets can efficiently detect a model's weak spots and overcome the limitations of real test sets in terms of quantity and/or diversity.
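The evaluation the abstract describes — measuring per-subgroup landmark error and testing whether the gap between subgroups is statistically significant — can be sketched roughly as below. The normalized-mean-error metric is standard for landmark detection, but the function names and the permutation test are illustrative assumptions, not the authors' implementation:

```python
import random
import statistics

def nme(pred, gt, iod):
    # Normalized mean error: mean Euclidean landmark distance,
    # divided by the inter-ocular distance (iod) for scale invariance.
    dists = [((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
             for (px, py), (gx, gy) in zip(pred, gt)]
    return statistics.mean(dists) / iod

def subgroup_bias_pvalue(errors_a, errors_b, n_perm=2000, seed=0):
    # Permutation test: is the observed gap in mean NME between two
    # population subgroups larger than expected under random relabeling?
    rng = random.Random(seed)
    observed = abs(statistics.mean(errors_a) - statistics.mean(errors_b))
    pooled = list(errors_a) + list(errors_b)
    n_a = len(errors_a)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            exceed += 1
    return exceed / n_perm  # small p-value -> evidence of subgroup bias
```

A small p-value from `subgroup_bias_pvalue` indicates the model performs measurably worse on one subgroup; a synthetic test set makes it cheap to collect enough samples per subgroup for this test to have power.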

