Building a Pilot Software Quality-in-Use Benchmark Dataset

09/18/2015
by Issa Atoum, et al.

Prepared domain-specific datasets play an important role in supervised learning approaches. In this article, a new sentence-level dataset for software quality-in-use is proposed. Three experts were chosen to annotate the data using a proposed annotation scheme, and the annotations were then reconciled in a no-match-eliminate process to reduce bias. The kappa (κ) statistic revealed an acceptable level of agreement, moderate to substantial, between the experts. The resulting dataset can be used to evaluate software quality-in-use models based on sentiment analysis. Moreover, the annotation scheme can be used to extend the current dataset.
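The inter-annotator agreement reported above can be computed with Cohen's kappa, which corrects observed agreement for agreement expected by chance. The sketch below is illustrative, not the authors' code; the label set and the two annotation lists are hypothetical.

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa between two annotators labeling the same items."""
    assert len(ann_a) == len(ann_b) and len(ann_a) > 0
    n = len(ann_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement: product of each annotator's marginal label frequencies.
    counts_a, counts_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sentiment labels from two annotators over eight sentences.
a = ["pos", "neg", "neu", "pos", "pos", "neg", "neu", "pos"]
b = ["pos", "neg", "neu", "neg", "pos", "neg", "pos", "pos"]
print(round(cohens_kappa(a, b), 3))  # → 0.6
```

On the commonly used Landis-and-Koch scale, κ in 0.41–0.60 is read as moderate agreement and 0.61–0.80 as substantial, matching the range the abstract reports.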

