Testing Framework for Black-box AI Models

02/11/2021
by Aniya Aggarwal, et al.

With the widespread adoption of AI models for important decision making, ensuring the reliability of such models remains a key challenge. In this paper, we present an end-to-end generic framework for testing AI models that performs automated test generation for different modalities, such as text, tabular, and time-series data, and across various properties, such as accuracy, fairness, and robustness. Our tool has been used to test industrial AI models and has been effective at uncovering issues present in those models. Demo video link: https://youtu.be/984UCU17YZI
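The abstract does not describe the tool's interface, but black-box property tests of this general kind can be sketched as follows. This is a minimal illustration, assuming a generic `predict_fn` callable and NumPy feature arrays; the names `robustness_test` and `fairness_test` are hypothetical and are not part of the authors' framework.

```python
import numpy as np

def robustness_test(predict_fn, X, epsilon=0.01, n_trials=10, seed=0):
    """Black-box robustness check: small random perturbations of numeric
    features should not change the model's predicted labels."""
    rng = np.random.default_rng(seed)
    base = predict_fn(X)
    flips = 0
    for _ in range(n_trials):
        noise = rng.normal(scale=epsilon, size=X.shape)
        if not np.array_equal(predict_fn(X + noise), base):
            flips += 1
    # Fraction of perturbation trials in which at least one label changed.
    return flips / n_trials

def fairness_test(predict_fn, X, protected_col, group_a, group_b):
    """Black-box individual-fairness check: changing only the protected
    attribute between two values should leave predictions unchanged."""
    X_a, X_b = X.copy(), X.copy()
    X_a[:, protected_col] = group_a
    X_b[:, protected_col] = group_b
    # Fraction of rows whose prediction flips when the protected attribute flips.
    return float(np.mean(predict_fn(X_a) != predict_fn(X_b)))
```

In this sketch, higher returned values indicate potential robustness or fairness issues; the actual framework described in the paper additionally generates the test inputs automatically and supports text and time-series modalities.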
