Towards automation of data quality system for CERN CMS experiment

09/25/2017
by Maxim Borisyak et al.

Daily operation of a large-scale experiment is a challenging task, particularly from the perspective of routine monitoring of the quality of the data being taken. We describe an approach that uses machine learning to automate data quality monitoring, based on partial use of data qualified manually by detector experts. The system automatically classifies clear cases of both good and bad data, and defers the remaining "grey area" cases to a human expert decision. This study uses collision data collected by the CMS experiment at the LHC in 2010. We demonstrate that the proposed workflow is able to automatically process at least 20% of samples without noticeable degradation of the result.
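The split between automatically classified cases and the "grey area" deferred to experts can be sketched as a confidence-threshold triage on the classifier's predicted probabilities. This is an illustrative sketch, not the paper's implementation; the function name `triage` and the thresholds `lo`/`hi` are assumptions chosen for the example.

```python
import numpy as np

def triage(p_good, lo=0.1, hi=0.9):
    """Split samples by classifier confidence.

    p_good: predicted probability that each sample is 'good' data.
    Samples with p_good >= hi are auto-labelled good, p_good <= lo
    auto-labelled bad; everything in between is the 'grey area'
    deferred to a human expert.
    (lo/hi are illustrative values, not those used in the study.)
    """
    labels = np.full(p_good.shape, "defer", dtype=object)
    labels[p_good >= hi] = "good"
    labels[p_good <= lo] = "bad"
    return labels

# Toy probabilities from a hypothetical classifier:
probs = np.array([0.02, 0.95, 0.5, 0.88, 0.99, 0.07])
labels = triage(probs)
auto_fraction = np.mean(labels != "defer")  # share handled without a human
```

Tightening `lo` and `hi` trades a larger automatically processed fraction against a higher risk of mislabelling, which is the trade-off behind the reported 20% figure.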

Related research

07/27/2018 — Detector monitoring with artificial neural networks at the CMS experiment at the CERN Large Hadron Collider
Reliable data quality monitoring is a key asset in delivering collision ...

09/14/2020 — Data Quality Evaluation using Probability Models
This paper discusses an approach with machine-learning probability model...

02/13/2019 — ATMSeer: Increasing Transparency and Controllability in Automated Machine Learning
To relieve the pain of manually selecting machine learning algorithms an...

08/31/2016 — Measuring the Quality of Exercises
This work explores the problem of exercise quality measurement since it ...

09/19/2023 — Semi-automatic staging area for high-quality structured data extraction from scientific literature
In this study, we propose a staging area for ingesting new superconducto...

03/23/2023 — Predicting the Future of the CMS Detector: Crystal Radiation Damage and Machine Learning at the LHC
The 75,848 lead tungstate crystals in CMS experiment at the CERN Large H...
