Does Symbolic Knowledge Prevent Adversarial Fooling?

12/19/2019
by   Stefano Teso, et al.

Arguments in favor of injecting symbolic knowledge into neural architectures abound. When done right, constraining a sub-symbolic model can substantially improve its performance, reduce its sample complexity, and prevent it from predicting invalid configurations. Focusing on deep probabilistic (logical) graphical models – i.e., constrained joint distributions whose parameters are determined (in part) by neural nets operating on low-level inputs – we draw attention to an elementary but unintended consequence of symbolic knowledge: the resulting constraints can propagate the negative effects of adversarial examples.
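To make the propagation effect concrete, here is a minimal sketch, not the paper's model: the two binary variables, the constraint A implies B, and all probability values are invented for illustration. Two independently predicted "neural" marginals are combined into a joint distribution conditioned on a hard logical constraint; an adversarial flip of the A-marginal then drags the constrained prediction for B along with it, even though the B-marginal itself is untouched.

```python
# Hypothetical toy example (not from the paper): a joint over (A, B) is
# conditioned on the logical constraint A -> B by zeroing the mass of the
# violating state (A=1, B=0) and renormalizing. Fooling only the A-marginal
# then changes the constrained marginal of B as well.
import numpy as np

def constrained_joint(p_a, p_b):
    """Joint over (A, B) from independent marginals, conditioned on A -> B."""
    joint = np.array([
        [(1 - p_a) * (1 - p_b), (1 - p_a) * p_b],  # row A = 0
        [0.0,                   p_a * p_b],        # row A = 1; (1, 0) is forbidden
    ])
    return joint / joint.sum()  # renormalize after removing the violating state

def marginal_b(joint):
    """P(B = 1) under the constrained joint."""
    return joint[:, 1].sum()

# Clean input: the net is confident that A = 0 and mildly favors B = 0.
p_a_clean, p_b = 0.05, 0.40
# Adversarial input: only the A-marginal is fooled (flipped toward A = 1).
p_a_adv = 0.95

print("P(B=1 | clean):", round(marginal_b(constrained_joint(p_a_clean, p_b)), 3))
print("P(B=1 | adv):  ", round(marginal_b(constrained_joint(p_a_adv, p_b)), 3))
```

Running the sketch, P(B=1) rises from roughly 0.41 on the clean input to roughly 0.93 on the adversarial one, even though p_b never changed: because the constraint forbids (A=1, B=0), fooling the model about A is enough to fool it about B too.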

Related research

07/23/2020 · Scalable Inference of Symbolic Adversarial Examples
We present a novel method for generating symbolic adversarial examples: ...

01/20/2023 · Generative Logic with Time: Beyond Logical Consistency and Statistical Possibility
This paper gives a theory of inference to logically reason symbolic know...

02/27/2021 · NEUROSPF: A tool for the Symbolic Analysis of Neural Networks
This paper presents NEUROSPF, a tool for the symbolic analysis of neural...

01/02/2017 · Conceptual Spaces for Cognitive Architectures: A Lingua Franca for Different Levels of Representation
During the last decades, many cognitive architectures (CAs) have been re...

04/22/2023 · Learning Symbolic Representations Through Joint GEnerative and DIscriminative Training
We introduce GEDI, a Bayesian framework that combines existing self-supe...

09/06/2022 · Scalable Regularization of Scene Graph Generation Models using Symbolic Theories
Several techniques have recently aimed to improve the performance of dee...

02/28/2023 · Semantic Strengthening of Neuro-Symbolic Learning
Numerous neuro-symbolic approaches have recently been proposed typically...
