A Survey on Online Judge Systems and Their Applications

10/14/2017
by Szymon Wasik, et al.

Online judges are systems designed for the reliable evaluation of algorithm source code submitted by users, which is then compiled and tested in a homogeneous environment. Online judges are becoming popular in various applications, so we review the state of the art for these systems. We classify them according to their principal objectives into systems supporting the organization of competitive programming contests, enhancing education and recruitment processes, facilitating the solving of data mining challenges, and online compilers and development platforms integrated as components of other custom systems. Moreover, we introduce a formal definition of an online judge system and summarize the common evaluation methodology supported by such systems. Finally, we briefly describe the Optil.io platform as an example of an online judge system designed for solving complex optimization problems, and we analyze the results of a competition conducted on this platform. The competition proved that online judge systems, strengthened by crowdsourcing concepts, can be successfully applied to accurately and efficiently solve complex industry- and science-driven challenges.
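The evaluation flow mentioned above (compile a submission, run it in a controlled environment against hidden test cases, and compare outputs against expected results) can be illustrated with a minimal sketch. The snippet below is not the evaluation methodology formalized in the survey, nor the Optil.io implementation; it is a simplified Python illustration in which the names `judge`, `TEST_CASES`, and `submission.c` are hypothetical, and a production judge would additionally sandbox execution and enforce memory limits.

```python
import subprocess
import tempfile
from pathlib import Path

# Hypothetical test cases: (stdin, expected stdout). In a real online judge
# these are hidden from the submitter and executed in a homogeneous,
# sandboxed environment.
TEST_CASES = [
    ("2 3\n", "5\n"),
    ("10 -4\n", "6\n"),
]


def judge(source_path: str, time_limit: float = 2.0) -> str:
    """Compile a C submission and run it against all test cases.

    Returns a verdict such as 'Accepted', 'Compilation Error',
    'Wrong Answer', 'Runtime Error', or 'Time Limit Exceeded'.
    """
    workdir = Path(tempfile.mkdtemp())
    binary = workdir / "solution"

    # Step 1: compile the submitted source in a controlled environment.
    compile_proc = subprocess.run(
        ["gcc", "-O2", "-o", str(binary), source_path],
        capture_output=True, text=True,
    )
    if compile_proc.returncode != 0:
        return "Compilation Error"

    # Step 2: run the binary on every test case and compare its output.
    for stdin_data, expected in TEST_CASES:
        try:
            run_proc = subprocess.run(
                [str(binary)], input=stdin_data,
                capture_output=True, text=True, timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if run_proc.returncode != 0:
            return "Runtime Error"
        if run_proc.stdout.strip() != expected.strip():
            return "Wrong Answer"

    return "Accepted"


if __name__ == "__main__":
    # Hypothetical usage: judge a local C file named submission.c.
    print(judge("submission.c"))
```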
