Interpreting Expert Annotation Differences in Animal Behavior

06/11/2021
by Megan Tjandrasuwita et al.

Hand-annotated data can vary due to factors such as subjective differences, intra-rater variability, and differing annotator expertise. We study annotations from different experts who labelled the same behavior classes on a set of animal behavior videos and observe variation in annotation styles. We propose a new method that uses program synthesis to help interpret annotation differences for behavior analysis. Our model selects relevant trajectory features and learns a temporal filter as part of a program, which corresponds to the estimated importance an annotator places on that feature at each timestamp. Our experiments on a dataset from behavioral neuroscience demonstrate that, compared to baseline approaches, our method is more accurate at capturing annotator labels and learns interpretable temporal filters. We believe that our method can lead to greater reproducibility of behavior annotations used in scientific studies. We plan to release our code.
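As a rough illustration of the idea (not the authors' implementation), the sketch below applies a temporal filter to a single hypothetical trajectory feature via a 1D convolution and thresholds the filtered response to produce per-frame labels. The feature name, the Gaussian filter, and the 0.5 threshold are illustrative assumptions; in the proposed method, both the selected feature and the temporal filter would be learned as part of a synthesized program.

```python
import numpy as np

def temporal_filter_response(feature_series, filter_weights):
    """Apply a temporal filter to one trajectory feature.

    feature_series : (T,) array, e.g. a per-frame distance between two animals
    filter_weights : (K,) array whose values estimate how much weight an
                     annotator places on the feature at each temporal offset
                     around the current frame
    """
    # Same-length convolution so every frame receives a filtered value.
    return np.convolve(feature_series, filter_weights, mode="same")

# Illustrative usage with made-up values (not from the paper).
rng = np.random.default_rng(0)
distance = rng.random(100)                    # hypothetical per-frame feature
filt = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
filt /= filt.sum()                            # smooth, normalized temporal filter

response = temporal_filter_response(distance, filt)

# A simple program could then threshold the response to predict behavior labels.
predicted_label = (response > 0.5).astype(int)
```

Because the filter weights directly show which time offsets contribute to the prediction, inspecting them offers one way to compare how different annotators weight the same feature over time.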
