OpenAttack: An Open-source Textual Adversarial Attack Toolkit

09/19/2020
by   Guoyang Zeng, et al.

Textual adversarial attacks have received wide and increasing attention in recent years. Various attack models have been proposed, which differ greatly from one another and are implemented with different programming frameworks and settings. These differences hinder the quick use and fair comparison of attack models. In this paper, we present an open-source textual adversarial attack toolkit named OpenAttack. It currently includes 12 typical attack models that cover all the main attack types. Its highly inclusive modular design not only supports quick use of existing attack models, but also enables great flexibility and extensibility. OpenAttack has broad uses, including comparing and evaluating attack models, measuring the robustness of a victim model, assisting in the development of new attack models, and adversarial training. Source code, built-in models, and documentation can be obtained at https://github.com/thunlp/OpenAttack.

