HumanAL: Calibrating Human Matching Beyond a Single Task

05/06/2022
by Roee Shraga, et al.

This work offers a novel view on the use of human input as labels, acknowledging that humans may err. We build a behavioral profile for each human annotator, which serves as a feature representation of the input they provide. We show that, using black-box machine learning, we can account for human behavior and calibrate annotators' input to improve labeling quality. To support our claims and provide a proof of concept, we experiment with three different matching tasks: schema matching, entity matching, and text matching. Our empirical evaluation suggests that the method improves the quality of gathered labels in multiple settings, including cross-domain settings (across different matching tasks).
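The abstract does not spell out the calibration procedure, so the following is only a minimal sketch of the general idea of calibrating noisy human labels: estimate each annotator's reliability from a small gold-labeled subset, then aggregate labels with a reliability-weighted vote. All names (`annotator_reliability`, `calibrated_label`, the data layout) are illustrative assumptions, not the paper's actual behavioral-profile method, which feeds richer behavioral features into a black-box learner.

```python
from collections import defaultdict

def annotator_reliability(gold, annotations):
    """Estimate each annotator's accuracy on a small gold-labeled subset.

    gold: {item_id: true_label}
    annotations: {item_id: {annotator_id: label}}
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for item, true_label in gold.items():
        for annotator, label in annotations.get(item, {}).items():
            total[annotator] += 1
            correct[annotator] += int(label == true_label)
    return {a: correct[a] / total[a] for a in total}

def calibrated_label(item, annotations, reliability):
    """Aggregate labels for one item, weighting each annotator's vote
    by their estimated reliability (unknown annotators get weight 0.5)."""
    scores = defaultdict(float)
    for annotator, label in annotations[item].items():
        scores[label] += reliability.get(annotator, 0.5)
    return max(scores, key=scores.get)
```

In this toy scheme, an annotator who was always right on the gold subset outweighs one who was right half the time; the paper's contribution is to replace such a single reliability score with a learned behavioral profile that transfers across matching tasks.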

Related research

- PoWareMatch: a Quality-aware Deep Learning Approach to Improve Human Schema Matching (09/15/2021). Schema matching is a core task of any data integration process. Being in...
- LOUC: Leave-One-Out-Calibration Measure for Analyzing Human Matcher Performance (08/03/2023). Schema matching is a core data integration task, focusing on identifying...
- Learning to Characterize Matching Experts (12/02/2020). Matching is a task at the heart of any data integration process, aimed a...
- Face Verification Bypass (03/28/2022). Face verification systems aim to validate the claimed identity using fea...
- Entity Matching by Pool-based Active Learning (11/01/2022). The goal of entity matching is to find the corresponding records represe...
- Generalization Bounds for Set-to-Set Matching with Negative Sampling (02/25/2023). The problem of matching two sets of multiple elements, namely set-to-set...
- Assisting Human Decisions in Document Matching (02/16/2023). Many practical applications, ranging from paper-reviewer assignment in p...
