Toward Reproducing Network Research Results Using Large Language Models

09/09/2023
by Qiao Xiang, et al.

Reproducing research results in the networking community is important for both academia and industry. The current best practice typically resorts to three approaches: (1) looking for publicly available prototypes; (2) contacting the authors to obtain a private prototype; and (3) manually implementing a prototype following the description in the publication. However, most published network research does not have public prototypes, and private prototypes are hard to obtain. As a result, most reproduction efforts are spent on manual implementation based on the publications, which is both time- and labor-consuming and error-prone. In this paper, we boldly propose reproducing network research results using the emerging large language models (LLMs). In particular, we first prove its feasibility with a small-scale experiment, in which four students with essential networking knowledge each reproduce a different networking system published in prominent conferences and journals by prompt engineering ChatGPT. We report observations and lessons from the experiment and discuss future open research questions of this proposal. This work raises no ethical issues.
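To give a concrete sense of what such prompt-engineering-based reproduction could look like, below is a minimal, hypothetical sketch that asks a chat LLM to draft a prototype from a paper's system description. It is not the authors' actual workflow; the model name, prompt wording, and paper excerpt are illustrative assumptions, and the only external dependency is the OpenAI Python client.

# Hypothetical sketch: prompting an LLM to draft a prototype of a published
# networking system from its paper description. The model name, prompt text,
# and excerpt are illustrative assumptions, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

paper_excerpt = """
<paste the relevant system-design section of the publication here>
"""

prompt = (
    "You are helping reproduce a networking research prototype. "
    "Based on the following description from the paper, write a Python "
    "implementation of the described mechanism, and list any design "
    "details the paper leaves unspecified.\n\n" + paper_excerpt
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

# The generated code and the list of unspecified details would then be
# reviewed, tested, and iteratively refined with follow-up prompts.
print(response.choices[0].message.content)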
