Pop Quiz! Can a Large Language Model Help With Reverse Engineering?

02/02/2022
by Hammond Pearce, et al.

Large language models (such as OpenAI's Codex) have demonstrated impressive zero-shot multi-task capabilities in the software domain, including code explanation. In this work, we examine whether this ability can be used to help with reverse engineering. Specifically, we investigate prompting Codex to identify the purpose, the capabilities, and the important variable names or values in code, even when that code has been produced through decompilation. Alongside examining the model's responses to open-ended questions, we devise a true/false quiz framework to characterize its performance. We present an extensive quantitative analysis of the model's measured performance on a set of program-purpose identification and information-extraction tasks: of the 136,260 questions we posed, it answered 72,754 (53.4%) correctly. A key takeaway is that, while promising, LLMs are not yet ready for zero-shot reverse engineering.
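
To make the quiz framework concrete, the sketch below shows how one might pose a single true/false question about a decompiled function to a code model and score its one-token answer. This is a minimal illustration under assumptions, not the authors' actual harness: the prompt wording, the sample function, and the model name (code-davinci-002, reached through the legacy OpenAI completions API of that era; Codex has since been deprecated) are all placeholders.

# Minimal sketch of one true/false quiz item for a code model.
# Assumptions: legacy openai (<1.0) Python SDK; Codex-era model name.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A small decompiler-style function (illustrative; it counts set bits).
DECOMPILED = """
int sub_401000(int a1) {
    int v1 = 0;
    while (a1) { v1 += a1 & 1; a1 >>= 1; }
    return v1;
}
"""

def ask_true_false(code: str, claim: str) -> str:
    """Pose one true/false question about `code`; return the answer token."""
    prompt = (
        "Consider the following C code:\n"
        f"{code}\n"
        f"Question: True or false? {claim}\n"
        "Answer:"
    )
    resp = openai.Completion.create(
        model="code-davinci-002",  # assumed Codex model name
        prompt=prompt,
        max_tokens=1,              # expect a single "True"/"False" token
        temperature=0.0,           # deterministic output for scoring
    )
    return resp["choices"][0]["text"].strip()

# Ground truth: the function is a popcount, so this claim is true.
print(ask_true_false(DECOMPILED, "This function counts the 1 bits in its argument."))

Scoring many such one-token answers against ground-truth labels is what produces aggregate figures like the 72,754-of-136,260 reported above; the paper's actual question set and prompt templates differ in detail.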

