
Explaining Dataset Changes for Semantic Data Versioning with Explain-Da-V (Technical Report)

by Roee Shraga, et al.

In multi-user environments in which data science and analysis are collaborative, multiple versions of the same datasets are generated. While managing and storing data versions has received some attention in the research literature, the semantic nature of such changes has remained under-explored. In this work, we introduce Explain-Da-V, a framework that aims to explain changes between two given dataset versions. Explain-Da-V generates explanations that use data transformations to explain changes. We further introduce a set of measures that evaluate the validity, generalizability, and explainability of these explanations. We empirically show, using an adapted existing benchmark and a newly created benchmark, that Explain-Da-V generates better explanations than existing data-transformation-synthesis methods.
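To make the core idea concrete, here is a minimal sketch (not the actual Explain-Da-V algorithm, and the candidate list, column names, and `explain_new_column` helper are all hypothetical): given two versions of a dataset where the second adds a derived column, search a small space of candidate transformations for one that reproduces the new column, and report it as the explanation.

```python
# Illustrative sketch only: explaining a new column in a dataset version
# by searching over hypothetical candidate transformations.
import pandas as pd

# Version 1 of the dataset.
v1 = pd.DataFrame({"name": ["ann", "bob"], "price": [10.0, 20.0]})
# Version 2 adds a derived column `price_with_tax` (values chosen for the example).
v2 = v1.assign(price_with_tax=[10.8, 21.6])

# Hypothetical candidate transformations: (description, function).
candidates = [
    ("price * 2",    lambda df: df["price"] * 2),
    ("price * 1.08", lambda df: df["price"] * 1.08),
    ("price + 1",    lambda df: df["price"] + 1),
]

def explain_new_column(old, new, column):
    """Return the first candidate transformation that reproduces `column`
    on this version pair (rounding to absorb floating-point noise)."""
    for desc, fn in candidates:
        if fn(old).round(6).equals(new[column].round(6)):
            return desc
    return None  # no candidate explains the change

explanation = explain_new_column(v1, v2, "price_with_tax")
print(explanation)
```

An explanation found this way is valid on the given version pair; the paper's measures go further and also ask whether it generalizes to unseen rows and is human-readable.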

