On the Feasibility of Specialized Ability Extracting for Large Language Code Models

03/06/2023
by   Zongjie Li, et al.

Recent progress in large language code models (LLCMs) has led to a dramatic surge in their use for software development. Nevertheless, training a well-performing LLCM is known to require substantial human effort for data collection and high-quality annotation. Additionally, the training dataset may be proprietary (or only partially released to the public), and the training process is often conducted on large-scale GPU clusters at high cost. Inspired by the recent success of imitation attacks in extracting computer vision and natural language models, this work launches the first imitation attack on LLCMs: by querying a target LLCM with carefully designed queries and collecting its outputs, the adversary can train an imitation model that closely mimics the behavior of the target LLCM. We systematically investigate the effectiveness of imitation attacks under different query schemes and different LLCM tasks. We also design novel methods to polish the LLCM outputs, resulting in an effective imitation training process. We summarize our findings and the lessons learned in this study, which help better depict the attack surface of LLCMs. Our research contributes to the growing body of knowledge on imitation attacks and defenses for deep neural models, particularly in the domain of code-related tasks.
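The abstract describes a query-and-collect pipeline: craft queries, send them to the black-box target LLCM, store the outputs, and later fine-tune an imitation model on the resulting query/output pairs. Below is a minimal, hypothetical Python sketch of that collection step; the function query_target_model, the example prompts, and the JSONL format are illustrative assumptions, not the paper's actual implementation.

import json

def query_target_model(prompt: str) -> str:
    """Send one crafted query to the black-box target LLCM and return its output.
    Placeholder: replace with whatever public inference API the adversary can reach."""
    raise NotImplementedError("plug in the target model's inference API here")

def collect_imitation_data(queries, out_path="imitation_corpus.jsonl"):
    """Pair each crafted query with the target's response to build a training
    corpus for the imitation model (one JSON object per line)."""
    with open(out_path, "w") as f:
        for q in queries:
            response = query_target_model(q)
            # Optionally filter or polish low-quality outputs before keeping them,
            # in the spirit of the output-polishing step the abstract mentions.
            f.write(json.dumps({"prompt": q, "completion": response}) + "\n")

# Illustrative code-summarization style queries (hypothetical examples).
crafted_queries = [
    "Summarize the following function:\ndef add(a, b):\n    return a + b",
    "Summarize the following function:\ndef is_even(n):\n    return n % 2 == 0",
]
# collect_imitation_data(crafted_queries)

The collected pairs would then serve as supervised fine-tuning data for the imitation model, which is trained offline by the adversary.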

Related research:

03/28/2020  Adversarial Imitation Attack
Deep learning models are known to be vulnerable to adversarial examples....

02/08/2023  Training-free Lexical Backdoor Attacks on Language Models
Large-scale language models have achieved tremendous success across vari...

05/25/2023  The False Promise of Imitating Proprietary LLMs
An emerging method to cheaply improve a weaker language model is to fine...

04/30/2020  Imitation Attacks and Defenses for Black-box Machine Translation Systems
We consider an adversary looking to steal or attack a black-box machine ...

09/14/2022  Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models
Neural text ranking models have witnessed significant advancement and ar...

05/24/2023  Tricking LLMs into Disobedience: Understanding, Analyzing, and Preventing Jailbreaks
Recent explorations with commercial Large Language Models (LLMs) have sh...

08/16/2023  Self-Deception: Reverse Penetrating the Semantic Firewall of Large Language Models
Large language models (LLMs), such as ChatGPT, have emerged with astonis...